System and method for transmitting data in storage controllers

Information

  • Patent Grant
  • Patent Number
    9,201,599
  • Date Filed
    Monday, July 19, 2004
  • Date Issued
    Tuesday, December 1, 2015
Abstract
A method and system for transferring frames from a storage device to a host system via a controller is provided. The method includes transferring frames from a transport module to a link module; and sending an acknowledgment to the transport module, wherein the link module sends the acknowledgement to the transport module and it appears to the transport module as if the host system sent the acknowledgement. The frames in the controller are tracked by creating a status entry indicating that a new frame is being created; accumulating data flow information, while a connection to transfer the frame is being established by a link module; and updating frame status as the frame build is completed, transferred, and acknowledged. The controller includes a header array in a transport module of the controller, wherein the header array includes plural layers and one of the layers is selected to process a frame.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates generally to storage device controllers, and more particularly, to efficiently reading and writing data.


2. Background


Conventional computer systems typically include several functional components. These components may include a central processing unit (CPU), main memory, input/output (“I/O”) devices, and streaming storage devices (for example, tape drives) (referred to herein as “storage device”).


In conventional systems, the main memory is coupled to the CPU via a system bus or a local memory bus. The main memory is used to provide the CPU access to data and/or program information that is stored in main memory at execution time. Typically, the main memory is composed of random access memory (RAM) circuits. A computer system with the CPU and main memory is often referred to as a host system.


The storage device is coupled to the host system via a controller that handles complex details of interfacing the storage device to the host system. Communications between the host system and the controller are usually provided using one of a variety of standard I/O bus interfaces.


Typically, when data is read from a storage device, a host system sends a read command to the controller, which stores the read command into a buffer memory. Data is read from the device and stored in the buffer memory.


Various standard interfaces are used to move data from host systems to storage devices. Fibre channel is one such standard. Fibre channel (incorporated herein by reference in its entirety) is an American National Standards Institute (ANSI) set of standards, which provides a serial transmission protocol for storage and network protocols such as HIPPI, SCSI, IP, ATM and others. Fibre channel provides an input/output interface to meet the requirements of both channel and network users.


Host systems often communicate with storage systems using the “PCI” bus interface. PCI stands for Peripheral Component Interconnect, a local bus standard that was developed by Intel Corporation®. The PCI standard is incorporated herein by reference in its entirety. Most modern computing systems include a PCI bus in addition to a more general expansion bus (e.g. the ISA bus). PCI supports 32-bit and 64-bit data paths and can run at clock speeds of 33 or 66 MHz.


PCI-X is a standard bus that is compatible with existing PCI cards using the PCI bus. PCI-X improves the data transfer rate of PCI from 132 MBps to as much as 1 GBps. The PCI-X standard (incorporated herein by reference in its entirety) was developed by IBM®, Hewlett Packard Corporation® and Compaq Corporation® to increase performance of high-bandwidth devices, such as devices implementing the Gigabit Ethernet and Fibre Channel standards, and processors that are part of a cluster.


The iSCSI standard (incorporated herein by reference in its entirety) is based on Small Computer Systems Interface (“SCSI”), which enables host computer systems to perform block data input/output (“I/O”) operations with a variety of peripheral devices including disk and tape devices, optical storage devices, as well as printers and scanners.


A traditional SCSI connection between a host system and peripheral device is through parallel cabling and is limited by distance and device support constraints. For storage applications, iSCSI was developed to take advantage of network architectures based on Fibre Channel and Gigabit Ethernet standards. iSCSI leverages the SCSI protocol over established networked infrastructures and defines the means for enabling block storage applications over TCP/IP networks. iSCSI defines the mapping of the SCSI protocol onto TCP/IP. The iSCSI architecture is based on a client/server model. Typically, the client is a host system such as a file server that issues a read or write command. The server may be a disk array that responds to the client request.


Serial ATA (“SATA”) is another standard, incorporated herein by reference in its entirety, that has evolved from the parallel ATA interface for storage systems. SATA provides a serial link with a point-to-point connection between devices, and data transfer can occur at 150 megabytes per second.


Another standard that has been developed is Serial Attached Small Computer Interface (“SAS”), incorporated herein by reference in its entirety. The SAS standard allows data transfer between a host system and a storage device. SAS provides a disk interface technology that leverages SCSI, SATA, and fibre channel interfaces for data transfer. SAS uses a serial, point-to-point topology to overcome the performance barriers associated with storage systems based on parallel bus or arbitrated loop architectures.


Conventional controllers are not designed to efficiently handle high throughput that is required by new and upcoming standards. For example, conventional controllers do not keep track of frame status, from the time when a frame build occurs to the time when the frame is transmitted. Also, if an error occurs during frame transmission, conventional controllers are not able to process frames from a known point.


Conventional controllers often have poor performance because they wait for a host to acknowledge receipt of a frame. A host does this by sending an ACK (acknowledgement) frame or a NAK (non-acknowledgement) frame. Often this delays frame processing because when a host receives a frame, it may choose to acknowledge the frame immediately or only after a significant amount of time.


Therefore, there is a need for a controller that can efficiently process data to accommodate high throughput rates.


SUMMARY OF THE INVENTION

A method for transferring frames from a storage device to a host system via a controller is provided. The method includes transferring frames from a transport module to a link module; and sending an acknowledgment to the transport module, wherein the link module sends the acknowledgement to the transport module and it appears to the transport module as if the host system sent the acknowledgement.


The transport module vacates an entry for a frame after it receives the acknowledgement from the link module. Also, the transport module waits for an acknowledgement from the host system, after a last frame for a read command is transmitted to the host system.


In yet another aspect of the present invention, a method for tracking frames in a controller used for facilitating frame transfer between a host system and a storage device is provided. The method includes: creating a status entry indicating that a new frame is being created; accumulating data flow information, while a connection to transfer the frame is being established by a link module; and updating frame status as frame build is completed, transferred, and acknowledged.


The method further includes: determining if a frame has been lost after transmission; and using a known good frame build point to process the frame if it was lost in transmission.


In yet another aspect of the present invention, a method is provided for processing frames in a transmit path of a controller that is used to facilitate frame transfer between a storage device and host system. The method includes, loading a received frame's context to a header array; building a frame and selecting a header array for processing the frame; and saving the context to a different header array if the frame processing is complex.


In yet another aspect of the present invention, a method for processing frames in a receive path of a controller used for facilitating frame transfer between a storage device and a host system is provided. The method includes: loading a context of a received frame into a header array; verifying received frame header information; and sending Transfer Ready or Response frames to the host system using a frame header context.


In yet another aspect of the present invention, a controller for transferring frames between a storage device and a host system is provided. The controller includes a header array in a transport module of the controller, wherein the header array includes plural layers and one of the layers is selected to process a frame.


This brief summary has been provided so that the nature of the invention may be understood quickly. A more complete understanding of the invention can be obtained by reference to the following detailed description of the preferred embodiments thereof, in connection with the attached drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features and other features of the present invention will now be described with reference to the drawings of a preferred embodiment. In the drawings, the same components have the same reference numerals. The illustrated embodiment is intended to illustrate, but not to limit the invention. The drawings include the following Figures:



FIG. 1A shows an example of a storage drive system used with the adaptive aspects of the present invention;



FIG. 1B shows a block diagram of a SAS module used in a controller, according to one aspect of the present invention;



FIG. 1C shows a detailed block diagram of a SAS module, according to one aspect of the present invention;



FIG. 1D shows a SAS frame that is received/transmitted using the SAS module according to one aspect of the present invention;



FIG. 1E shows a block diagram of a transport module according to one aspect of the present invention;



FIG. 2 shows a flow diagram for processing a data transfer command in the transmit path, according to one aspect of the present invention;



FIG. 3 shows a flow diagram for a link module to acknowledge frame receipt, according to one aspect of the present invention;



FIG. 4 shows a flow diagram of process steps in the transmit path of a controller, according to one aspect of the present invention;



FIG. 5 shows a flow diagram of the receive process using a header array, according to one aspect of the present invention;



FIG. 6 shows a block diagram for selecting a header array, according to one aspect of the present invention; and


FIGS. 7(1)-7(2) (referred to as FIG. 7) show header array contents, according to one aspect of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Controller Overview:


To facilitate an understanding of the preferred embodiment, the general architecture and operation of a controller will initially be described. The specific architecture and operation of the preferred embodiment will then be described with reference to the general architecture.



FIG. 1A shows an example of a storage drive system (with an optical disk or tape drive), included in (or coupled to) a computer system. The host computer (not shown) and the storage device 110 (also referred to as disk 110) communicate via a port using a disk formatter “DF” 104. In an alternate embodiment (not shown), the storage device 110 is an external storage device, which is connected to the host computer via a data bus. The data bus, for example, is a bus in accordance with a Small Computer System Interface (SCSI) specification. Those skilled in the art will appreciate that other communication buses known in the art can be used to transfer data between the drive and the host system.


As shown in FIG. 1A, the system includes controller 101, which is coupled to buffer memory 111 and microprocessor 100. Interface 109 serves to couple microprocessor bus 107 to microprocessor 100 and a micro-controller 102 and facilitates transfer of data, address, timing and control information. A read only memory (“ROM”), omitted from the drawing, is used to store firmware code executed by microprocessor 100.


Controller 101 can be an integrated circuit (IC) that is comprised of various functional modules, which provide for the writing and reading of data stored on storage device 110. Buffer memory 111 is coupled to controller 101 via ports to facilitate transfer of data, timing and address information. Buffer memory 111 may be a double data rate synchronous dynamic random access memory (“DDR-SDRAM”) or synchronous dynamic random access memory (“SDRAM”), or any other type of memory.


Disk formatter 104 is connected to microprocessor bus 107 and to buffer controller 108. A direct memory access (“DMA”) interface (not shown) is connected to microprocessor bus 107 and to a data and control port (not shown).


Buffer controller (also referred to as “BC”) 108 connects buffer memory 111, channel one (CH1) logic 105, and error correction code (“ECC”) module 106 to bus 107. Buffer controller 108 regulates data movement into and out of buffer memory 111.


CH1 logic 105 is functionally coupled to SAS module 103, which is described below in detail. CH1 logic 105 interfaces between buffer memory 111 and SAS module 103. SAS module 103 interfaces with host interface 104A to transfer data to and from disk 110.


Data flow between a host and disk passes through buffer memory 111 via channel 0 (“CH0”) logic 106A. ECC module 106 generates ECC that is saved on disk 110 during a write operation and provides a correction mask to BC 108 for disk 110 read operations.


The channels (CH0 106A, CH1 105, and Channel 2 (not shown)) are granted arbitration turns when they are allowed access to buffer memory 111 in high-speed burst write or read operations for a certain number of clocks. The channels use first-in-first-out (“FIFO”) type memories to store data that is in transit. Firmware running on processor 100 can access the channels based on bandwidth and other requirements.


To read data from device 110, a host system sends a read command to controller 101, which stores the read command in buffer memory 111. Microprocessor 100 then reads the command out of buffer memory 111 and initializes the various functional blocks of controller 101. Data is read from device 110 and is passed to buffer controller 108.


To write data, a host system sends a write command to disk controller 101, and the command is stored in buffer 111. Microprocessor 100 reads the command out of buffer 111 and sets up the appropriate registers. Data is transferred from the host and is first stored in buffer 111, before being written to disk 110. CRC (cyclic redundancy check code) values are calculated based on a logical block address (“LBA”) for the sector being written. Data is read out of buffer 111, appended with ECC code and written to disk 110.
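By way of illustration only, the following C sketch shows one way CRC values could be seeded from the sector's LBA so that a sector returned from the wrong address fails its check on readback. The CRC-32 polynomial and the seeding scheme are assumptions for this sketch, not details taken from the controller described here.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration: seed a CRC-32 with the sector's logical
 * block address ("LBA") so that data written to, or read from, the
 * wrong LBA fails the check. Polynomial and seeding are assumptions. */
static uint32_t crc32_lba(uint32_t lba, const uint8_t *data, size_t len)
{
    uint32_t crc = ~lba;                      /* seed derived from the LBA */
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    uint8_t sector[512];
    memset(sector, 0xA5, sizeof(sector));
    /* Identical data at different LBAs yields different CRCs. */
    printf("CRC @ LBA 100: %08x\n", crc32_lba(100, sector, sizeof(sector)));
    printf("CRC @ LBA 101: %08x\n", crc32_lba(101, sector, sizeof(sector)));
    return 0;
}
```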


Frame Structure:



FIG. 1D shows a SAS frame 129 that is received/transmitted using SAS module 103. Frame 129 includes a WWN value 129A, a start of frame (“SOF”) value 129G, a frame header 129B that includes a frame type field 129E, payload/data 129C, CRC value 129D and end of frame (“EOF”) 129F. The SAS specification addresses all devices by a unique World Wide Name (“WWN”) address.


Also, a frame may be an interlock or a non-interlock frame, as specified by field 129E (part of frame header 129B). For an interlock frame, acknowledgement from a host is required for further processing after the frame is sent to the host. Non-interlock frames are passed through to a host without host acknowledgement (up to 256 frames per the SAS standard).
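The C sketch below shows one plausible in-memory layout for the frame fields named in FIG. 1D, including the interlock/non-interlock distinction carried by field 129E. All widths and the enum encoding are illustrative assumptions; in the SAS protocol itself the SOF/EOF delimiters are link-layer primitives rather than stored header bytes.

```c
#include <stdbool.h>
#include <stdint.h>

enum frame_type {                  /* frame type field 129E */
    FRAME_INTERLOCK,               /* host ACK required before proceeding */
    FRAME_NON_INTERLOCK,           /* up to 256 may be sent without ACK */
};

struct sas_frame {                 /* fields of frame 129, widths assumed */
    uint64_t wwn;                  /* World Wide Name value 129A */
    uint32_t sof;                  /* start-of-frame ("SOF") 129G */
    struct {
        enum frame_type type;      /* 129E */
        uint8_t raw[24];           /* remaining header bytes (assumed) */
    } header;                      /* frame header 129B */
    uint8_t payload[1024];         /* payload/data 129C (assumed max) */
    uint32_t crc;                  /* CRC value 129D */
    uint32_t eof;                  /* end-of-frame ("EOF") 129F */
};

/* An interlock frame requires acknowledgement from the host before
 * further processing. */
static inline bool requires_host_ack(const struct sas_frame *f)
{
    return f->header.type == FRAME_INTERLOCK;
}
```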


SAS Module 103:



FIG. 1B shows a top level block diagram for SAS module 103 used in controller 101. SAS module 103 includes a physical (“PHY”) module 112, a link module 113 and a transport module (“TRN”) 114 described below in detail. A micro-controller 115 is used to co-ordinate operations between the various modules. A SAS interface 116 is also provided to the PHY module 112 for interfacing with a host and interface 117 is used to initialize the PHY module 112.



FIG. 1C shows a detailed block diagram of SAS module 103 with various sub-modules. Incoming data 112C is received from a host system, while outgoing data 112D is sent to a host system or another device/component.


PHY Module 112:


PHY module 112 includes a serializer/deserializer (“SERDES”) 112A that serializes encoded data for transmission 112D and de-serializes received data 112C. SERDES 112A also recovers a clock signal from incoming data stream 112C and performs word alignment.


PHY control module 112B controls SERDES 112A and provides the functions required by the SATA standard.


Link Module 113:


Link module 113 opens and closes connections, exchanges identity frames, maintains ACK/NAK (i.e. acknowledged/not acknowledged) balance and provides credit control. As shown in FIG. 1C, link module 113 has a receive path 118 that receives incoming frames 112C and a transmit path 120 that assists in transmitting information 112D. Addresses 121 and 122 are used for received and transmitted data, respectively.


Receive path 118 includes a converter 118C for converting 10-bit data to 8-bit data, and an elasticity buffer/primitive detect segment 118B that transfers data from the receive clock domain to the transmit clock domain and decodes primitives. Descrambler module 118A unscrambles data and checks the cyclic redundancy check code (“CRC”).


Transmit path 120 includes a scrambler 120A that generates CRC and scrambles (encodes) outgoing data; and primitive mixer module 120B that generates primitives required by SAS protocol/standard and multiplexes the primitives with the outgoing data. Converter 120C converts 8-bit data to 10-bit format.


Link module 113 uses plural state machines 119 to achieve the various functions of its sub-components. State machines 119 include a receive state machine for processing received frames, a transmit state machine for processing transmit frames, a connection state machine for performing various connection-related functions and an initialization state machine that becomes active after an initialization request or reset.


Transport module 114:


Transport module 114 interfaces with CH1 105 and link module 113. In transmit mode, TRN module 114 receives data from CH1 105, loads the data (with fibre channel (FCP) header 127) in FIFO 125 and sends data to link module 113 encapsulated with a header (129B) and a CRC value (129D). In receive mode, TRN module 114 receives data from link module 113 (in FIFO 124), and re-packages the data (extracts headers 126 and 128) before it is sent to CH1 105. CH1 105 then writes the data to buffer 111. State machine 123 is used to co-ordinate data transfer in the receive and transmit paths.



FIG. 1E shows a detailed block diagram of transport module 114. Transmit FIFO 125 operates at BCCLK 125B (the BC 108 clock) on the input side and at SASCLK 125A on the output side. FIFO 125 holds one or more frames with a header, payload and CRC value.


Transport module 114 includes another FIFO on the transmit side, the Fx FIFO 114C. Fx FIFO 114C includes a write pointer, which specifies the entry to use when a new frame is built by transport module 114. Fx FIFO 114C also includes an ACK/NAK pointer (“akptr”). When link module 113 receives an ACK for a frame, the entry is removed from Fx FIFO 114C and the akptr is incremented.


Fx FIFO 114C also includes a “lnkptr” that indicates the frame being sent to link module 113 at a given time. Fx FIFO 114C also includes a pointer for MP 100 that allows microprocessor 100 to inspect and modify the contents of the Fx FIFO 114C.
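To make the pointer scheme concrete, here is a minimal C model of the Fx FIFO with its write pointer, “lnkptr” and “akptr”. The depth and the entry contents are assumptions for illustration; the actual Fx FIFO is a hardware structure inside transport module 114.

```c
#include <stdbool.h>
#include <stdint.h>

#define FX_DEPTH 8u                 /* assumed power-of-two depth */

struct fx_entry {
    uint32_t frame_tag;             /* identifies the tracked frame */
    bool     built;                 /* frame build completed */
    bool     sent;                  /* handed to link module 113 */
};

struct fx_fifo {
    struct fx_entry entry[FX_DEPTH];
    unsigned wrptr;                 /* entry to use for the next new frame */
    unsigned lnkptr;                /* frame currently going to the link */
    unsigned akptr;                 /* oldest entry still awaiting an ACK */
};

/* A new frame build claims the entry at the write pointer. */
static void fx_new_frame(struct fx_fifo *f, uint32_t tag)
{
    struct fx_entry *e = &f->entry[f->wrptr % FX_DEPTH];
    e->frame_tag = tag;
    e->built = false;
    e->sent  = false;
    f->wrptr++;
}

/* When link module 113 receives an ACK for a frame, the oldest
 * outstanding entry is vacated and akptr is incremented. */
static void fx_ack(struct fx_fifo *f)
{
    if (f->akptr != f->wrptr)
        f->akptr++;
}
```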


Transport module 114 also includes a multiplier 114A that is used for hardware assist when firmware initializes transport module 114 registers, and credit logic 114D that provides available-credit information to link module 113 for received data.


A header array 114B is used for processing data efficiently, as described below in detail, according to one aspect of the present invention.


Transport module 114 can send interrupts and status 130 to MP 100 (or to MC 102/MC 115) on the receive side. Control and configuration information is shown as 133, while details regarding incoming data (for example, transfer count, burst length, data offset and frame size) are shown as 134.


On the transmit side, interrupts/status are shown as 131A, control/configuration as 131, and outgoing data attributes (for example, transfer count, burst length, data offset and frame size) are shown as 132.


Frame Processing:



FIG. 2 shows a flow diagram for processing a data transfer command in the transmit path, according to one aspect of the present invention. In step S200, the process starts and in step S201, a data transfer command is received from a host system via host interface 104A.


In step S202, a status entry is created in Fx FIFO 114C. The entry indicates that a new frame is being built.


In step S203, to reduce latency, WWN index value 129A is sent to link module 113. This allows link module 113 and PHY module 112 to initiate a connection, while the frame is being built.


In step S204, link module 113/PHY 112 initiates a connection and data flow information is accumulated simultaneously. This reduces latency for transmitting frames.


In step S205, when the frame is built, the status is updated in FIFO 114C. The same is performed when the frame is sent.


In step S206, after the frame is sent, the process (MC 115) determines if the frame is lost. This is based on whether the host system indicates that the frame has been received. If the frame is not lost, then in step S207, the entry is vacated for the next frame.


If the frame is lost, then the process starts again. However, frame processing does not have to begin again from step S200; instead, processing resumes from a known point, since frame status is continuously updated from the time a frame is created to the time it is sent.


MC 115 can tag frames using various identifiers. For example, a frame may be tagged so that link module 113 discards the frame; a frame may be tagged as an interlock or non-interlock frame; a frame may be tagged as an error frame; or the last frame of a read command may be tagged as the “last frame.”


The foregoing process allows MC 115 to know who requested a frame, where in buffer 111 the frame came from, how many blocks comprise the frame, and all the information used to build the frame (for example, CRC residue, logical block address and offset from the beginning of the block). This information is used to process the frame if the frame is lost and to perform diagnostics on a connection.
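A minimal sketch of this per-frame bookkeeping appears below. The field names and widths are hypothetical, but the record carries the information enumerated above (requester, buffer location, block count, CRC residue, LBA and block offset) so that processing can resume from a known point rather than restarting the command.

```c
#include <stdint.h>

enum frame_state {
    FRAME_BUILDING,      /* status entry created, frame being built */
    FRAME_BUILT,         /* build complete; connection may still be opening */
    FRAME_SENT,          /* transferred toward the host */
    FRAME_ACKED,         /* acknowledged; entry can be vacated */
};

struct frame_status {    /* hypothetical layout of one status entry */
    enum frame_state state;
    uint16_t requester_id;    /* who requested the frame */
    uint32_t buf_addr;        /* where in buffer 111 the data came from */
    uint16_t num_blocks;      /* how many blocks comprise the frame */
    uint32_t lba;             /* logical block address */
    uint16_t block_offset;    /* offset from the beginning of the block */
    uint32_t crc_residue;     /* CRC residue carried across frames */
};

/* If a frame is reported lost, rebuilding restarts from this saved
 * context instead of from step S200. */
static void resume_from_known_point(struct frame_status *s)
{
    s->state = FRAME_BUILDING;
}
```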


Process Flow for Link Module 113 Acknowledging Frame Receipt:



FIG. 3 shows a flow diagram of process steps where, after a frame is sent by transport module 114 to link module 113, the link module 113 acknowledges frame receipt so that transport module 114 does not wait for host acknowledgement until the last frame of a read command has been sent.


In step S300, link module 113, via PHY module 112, transfers the frame to a host.


In step S301, link module 113 sends an ACK frame to transport module 114. Transport module 114 considers the ACK to be that from a host. Firmware can enable or disable the mode that allows link module 113 to send an ACK frame. If the link module 113 is not enabled to send an ACK frame, then transport module 114 waits for the host to acknowledge frame receipt (for interlock frames). Thereafter, in step S302, the entry for the transmitted frame in FIFO 125 is vacated.


In step S303, data flow information is stored in a register (not shown). Thereafter, in step S304, data is released to BC 108 and transport module 114 waits for an ACK/NAK balance condition, after the last frame has been transmitted.
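The sketch below models the firmware-selectable acknowledgement mode of FIG. 3 in C. The structure and function names are hypothetical; only the behavior (a local ACK from link module 113 when the mode is enabled, a wait for the host's ACK when it is not) comes from the description above.

```c
#include <stdbool.h>

struct link_module {
    bool local_ack_enabled;     /* mode set or cleared by firmware */
};

enum ack_source { ACK_FROM_LINK, ACK_FROM_HOST };

/* Called after a frame has been transferred to the host via PHY
 * module 112 (step S300). */
static enum ack_source acknowledge(const struct link_module *lnk)
{
    if (lnk->local_ack_enabled)
        return ACK_FROM_LINK;   /* link module 113 sends the ACK itself;
                                 * transport module 114 treats it as the
                                 * host's ACK and vacates the entry */
    return ACK_FROM_HOST;       /* transport module 114 waits for the
                                 * host to acknowledge (interlock frames) */
}
```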



FIG. 4 shows a flow diagram of process steps in the transmit path of controller 101. In step S400, a command is received from a host. The command includes a context and data. In step S401, the context is loaded in header array 114B (as shown in FIG. 6) by MC 115 or MP 100. In one aspect, the header array 114B includes one array element each for the receive and transmit processes and two for either context switches or spares. Since initializing a header array can take a significant amount of time, the extra (spare) arrays are provided to allow the microprocessor 100 firmware to overlap initializing the header array for the next processes while transmitting and receiving frames for the current processes.


In step S402, the frame is built and a header row is selected from the header array 114B. This is performed based on a command/signal/bit set in register 601.


In step S403, the frame is processed as discussed below with respect to steps S405 and S406. For a non-complex case, for example, where there is no interrupt involved, a response is sent in step S406 using the selected row from header array 114B. For a complex case, in step S404, the context is saved in another header array 114B row and then the frame is sent. After the frame is processed in step S405, the process reverts to the previous header row (step S406).


It is noteworthy that header array 114B allows firmware to interrupt what is being transmitted at a given time, save the context into the array in a single access, select a new context, process the new context and then revert to the old context. The header array 114B architecture allows generation of different types of frames using the same array element.
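A rough C model of the save/select/revert sequence follows, assuming a four-row header array (one row each for the receive and transmit processes plus two spares) and a fixed row size; both numbers are illustrative.

```c
#include <stdint.h>
#include <string.h>

#define HDR_ROWS      4     /* assumed: rx, tx, and two spares */
#define HDR_ROW_WORDS 16    /* assumed row size in 32-bit words */

static uint32_t header_array[HDR_ROWS][HDR_ROW_WORDS];

/* Save the interrupted context into a spare row in a single access. */
static void save_context(int from_row, int spare_row)
{
    memcpy(header_array[spare_row], header_array[from_row],
           sizeof(header_array[0]));
}

/* Revert to the old context after the new one has been processed. */
static void restore_context(int spare_row, int to_row)
{
    memcpy(header_array[to_row], header_array[spare_row],
           sizeof(header_array[0]));
}
```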



FIG. 5 shows a flow diagram of the receive process using header array 114B, according to one aspect of the present invention. In step S500, a write data command is received from the host. In step S501, MC 115 or MP 100 loads the context into header array 114B. In step S502, the frame header is verified. If the frame header cannot be verified, then an error flag is set in step S503.


If the frame header can be verified, then in step S504, data is saved in buffer 111. Thereafter, in step S505, an XFER_RDY signal is sent to the host.


It is noteworthy that a receive operation is split into different bursts paced by the recipient. Header array 114B can save a current context of a receive operation at the beginning of each burst to allow for retries, in case of errors.
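A small sketch of this burst-boundary snapshotting is shown below; rx_burst() is a stub standing in for the hardware transfer of one burst, and the context fields are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

struct rx_context {
    uint32_t offset;        /* progress within the receive operation */
    uint32_t crc_residue;   /* running CRC state */
};

/* Stub for the hardware transfer of one burst; a real implementation
 * would return false on a CRC or framing error. */
static bool rx_burst(struct rx_context *ctx)
{
    ctx->offset += 512;
    return true;
}

static void receive_paced(struct rx_context *ctx, int nbursts)
{
    for (int i = 0; i < nbursts; i++) {
        struct rx_context snapshot = *ctx;  /* save at the burst boundary */
        if (!rx_burst(ctx)) {
            *ctx = snapshot;                /* error: restore known-good context */
            i--;                            /* and retry the same burst */
        }
    }
}
```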


It is noteworthy that the transmit and receive processes may use the same or different array elements. While one or two array elements are actively processed at a given time, MP 100 may process other elements for future processing and thus improve overall controller 101 performance.


Header Array 114B:


As shown in FIG. 6, header array 114B has plural rows/layers and one row is selected by signal/command/bit generated from header select register 601. Array addresses are shown as 607.


Various command/signal/bit (used interchangeably) values 602-606 are used for processing both receive and transmit operations. For example, when all the data for the write command is received by controller 101, a “Good Rx” response frame is selected by 604. An “XFER_RDY” frame is selected by 605 when all data for a burst has been written in buffer 111. A frame header is selected by 602 and a “Good Tx” response is selected by 603 for data frame transmission. A context header array (row) is selected by bit 606 after a frame is received, and the context is checked based on the selected array.


Header array mask 608 is used for determining which information in a header participates in context save and retrieve operations.
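The C sketch below models the row selections 602-606 and mask 608. The bit positions and the word-wise masking are assumptions; the description above names the selections but not their encodings.

```c
#include <stdint.h>

enum hdr_select {                    /* one select bit per header row use */
    SEL_FRAME_HEADER = 1u << 0,      /* 602: header for data frame tx */
    SEL_GOOD_TX      = 1u << 1,      /* 603: "Good Tx" response */
    SEL_GOOD_RX      = 1u << 2,      /* 604: "Good Rx" response */
    SEL_XFER_RDY     = 1u << 3,      /* 605: XFER_RDY after a burst */
    SEL_CONTEXT      = 1u << 4,      /* 606: context row for rx checking */
};

/* Only masked header fields participate in a context save or retrieve:
 * dst keeps its old bits wherever the mask is 0. */
static uint32_t masked_save(uint32_t dst, uint32_t src, uint32_t mask)
{
    return (dst & ~mask) | (src & mask);
}
```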



FIG. 7 shows header array 114B contents including control context, header context, transfer context, flow control context and input/output context.


The header array architecture of FIG. 6 allows controller 101 to efficiently manage frame headers both on transmit and receive paths. Headers are built ahead in an array, plural headers may be generated for a single connection and incoming headers are checked using an expected header array 114B.


Although the present invention has been described with reference to specific embodiments, these embodiments are illustrative only and not limiting. Many other applications and embodiments of the present invention will be apparent in light of this disclosure.

Claims
• 1. A controller for a storage device associated with a host, the controller comprising: a host interface to i) receive read commands, write commands, and first data to be stored on the storage device from the host, and ii) transmit second data stored on the storage device to the host; and a Serial Attached Small Computer Interface (SAS) module located on the controller, wherein the SAS module transfers the first data and the second data between the storage device and the host interface, and wherein the SAS module includes a transport module to i) receive the second data from the storage device, ii) build a data frame based on the second data, wherein the data frame corresponds to an interlock frame requiring an acknowledgement from the host prior to processing subsequent data frames, and a link module to receive the data frame from the transport module, transfer the data frame to the host interface to be transmitted from the controller to the host, and in response to transferring the data frame to the host interface, selectively operate in (i) a first mode when the first mode is enabled by firmware and (ii) a second mode when the first mode is disabled by the firmware, wherein when the first mode is enabled, the link module operates in the first mode to i) generate an acknowledgment frame acknowledging receipt of the data frame by the host independently of either the host or another device external to the controller generating and sending the acknowledgment frame, and ii) transmit the acknowledgement frame to the transport module, and when the first mode is disabled, the link module operates in the second mode to wait for the host to generate and transmit the acknowledgment frame to be received by the SAS module via the host interface.
  • 2. The controller of claim 1, wherein the transport module notifies the link module that the data frame is being built prior to completing the building of the data frame.
  • 3. The controller of claim 2, wherein the link module initiates a connection with a physical layer module in response to being notified that the data frame is being built.
  • 4. The controller of claim 1, wherein the transport module stores a status entry for the data frame, and wherein the status entry is configurable to indicate i) that the data frame is being built, ii) that the building of the data frame is complete, and iii) that the data frame was transmitted from the SAS module.
  • 5. The controller of claim 4, wherein the SAS module determines whether the data frame is lost, and i) if the data frame is lost, continues at least one of building and transmitting the frame based on the status entry, and ii) if the data frame is not lost, clears the status entry.
  • 6. The controller of claim 5, wherein determining whether the data frame is lost includes determining whether the data frame is lost based on whether the acknowledgement frame was received.
  • 7. The controller of claim 1, wherein the SAS module tags the data frame with an identifier that indicates whether transmission of the data frame requires an acknowledgement frame from the host.
  • 8. The controller of claim 1, wherein the data frame includes a field that indicates whether the data frame requires an acknowledgement frame from the host.
• 9. A method of operating a controller for a storage device associated with a host, the method comprising: using a host interface, receiving read commands, write commands, and first data to be stored on the storage device from the host, and transmitting second data stored on the storage device to the host; and using a Serial Attached Small Computer Interface (SAS) module located on the controller, transferring the first data and the second data between the storage device and the host interface, receiving the second data from the storage device, building a data frame based on the second data, wherein the data frame corresponds to an interlock frame requiring an acknowledgement from the host prior to processing subsequent data frames, transferring the data frame to the host interface to be transmitted from the controller to the host, and in response to transferring the data frame to the host interface, selectively operating in (i) a first mode when the first mode is enabled by firmware and (ii) a second mode when the first mode is disabled by the firmware, wherein when the first mode is enabled, operating in the first mode to i) generate an acknowledgment frame acknowledging receipt of the data frame by the host independently of either the host or another device external to the controller generating and sending the acknowledgment frame, and ii) transmit the acknowledgement frame to the transport module, and when the first mode is disabled, operate in the second mode to wait for the host to generate and transmit the acknowledgment frame to be received by the SAS module via the host interface.
  • 10. The method of claim 9, further comprising generating a notification that the data frame is being built prior to completing the building of the data frame.
  • 11. The method of claim 10, further comprising initiating a connection with a physical layer module in response to the notification that the data frame is being built.
  • 12. The method of claim 9, further comprising storing a status entry for the data frame, wherein the status entry is configurable to indicate i) that the data frame is being built, ii) that the building of the data frame is complete, and iii) that the data frame was transmitted from the SAS module.
• 13. The method of claim 12, further comprising: determining whether the data frame is lost; if the data frame is lost, continuing at least one of building and transmitting the frame based on the status entry; and if the data frame is not lost, clearing the status entry.
  • 14. The method of claim 13, wherein determining whether the data frame is lost includes determining whether the data frame is lost based on whether the acknowledgement frame was received.
  • 15. The method of claim 9, further comprising tagging the data frame with an identifier that indicates whether transmission of the data frame requires an acknowledgement frame from the host.
  • 16. The method of claim 9, wherein the data frame includes a field that indicates whether the data frame requires an acknowledgement frame from the host.
US Referenced Citations (197)
Number Name Date Kind
3800281 Devore et al. Mar 1974 A
3988716 Fletcher et al. Oct 1976 A
4001883 Strout et al. Jan 1977 A
4016368 Apple, Jr. Apr 1977 A
4050097 Miu et al. Sep 1977 A
4080649 Calle et al. Mar 1978 A
4156867 Bench et al. May 1979 A
4225960 Masters Sep 1980 A
4275457 Leighou et al. Jun 1981 A
4390969 Hayes Jun 1983 A
4451898 Palermo et al. May 1984 A
4486750 Aoki Dec 1984 A
4500926 Yoshimaru et al. Feb 1985 A
4587609 Boudreau et al. May 1986 A
4603382 Cole Jul 1986 A
4625321 Pechar et al. Nov 1986 A
4667286 Young et al. May 1987 A
4777635 Glover Oct 1988 A
4805046 Kuroki et al. Feb 1989 A
4807116 Katzman et al. Feb 1989 A
4807253 Hagenauer et al. Feb 1989 A
4809091 Miyazawa et al. Feb 1989 A
4811282 Masina Mar 1989 A
4812769 Agoston Mar 1989 A
4860333 Bitzinger et al. Aug 1989 A
4866606 Kopetz Sep 1989 A
4881232 Sako et al. Nov 1989 A
4920535 Watanabe et al. Apr 1990 A
4949342 Shimbo et al. Aug 1990 A
4970418 Masterson Nov 1990 A
4972417 Sako et al. Nov 1990 A
4975915 Sako et al. Dec 1990 A
4989190 Kuroe et al. Jan 1991 A
5014186 Chisholm May 1991 A
5023612 Liu Jun 1991 A
5027357 Yu et al. Jun 1991 A
5050013 Holsinger Sep 1991 A
5051998 Murai et al. Sep 1991 A
5068755 Hamilton et al. Nov 1991 A
5068857 Yoshida Nov 1991 A
5072420 Conley et al. Dec 1991 A
5088093 Storch et al. Feb 1992 A
5109500 Iseki et al. Apr 1992 A
5117442 Hall May 1992 A
5127098 Rosenthal et al. Jun 1992 A
5133062 Joshi et al. Jul 1992 A
5136592 Weng Aug 1992 A
5146585 Smith, III Sep 1992 A
5157669 Yu et al. Oct 1992 A
5162954 Miller et al. Nov 1992 A
5193197 Thacker Mar 1993 A
5204859 Paesler et al. Apr 1993 A
5218564 Haines et al. Jun 1993 A
5220569 Hartness Jun 1993 A
5237593 Fisher et al. Aug 1993 A
5243471 Shinn Sep 1993 A
5249271 Hopkinson Sep 1993 A
5257143 Zangenehpour Oct 1993 A
5261081 White et al. Nov 1993 A
5271018 Chan Dec 1993 A
5274509 Buch Dec 1993 A
5276564 Hessing et al. Jan 1994 A
5276662 Shaver, Jr. et al. Jan 1994 A
5276807 Kodama et al. Jan 1994 A
5276813 Elliott et al. Jan 1994 A
5280488 Glover et al. Jan 1994 A
5285327 Hetzler Feb 1994 A
5285451 Henson et al. Feb 1994 A
5301333 Lee Apr 1994 A
5307216 Cook et al. Apr 1994 A
5315708 Eidler et al. May 1994 A
5339443 Lockwood Aug 1994 A
5361266 Kodama et al. Nov 1994 A
5361267 Godiwala et al. Nov 1994 A
5408644 Schneider et al. Apr 1995 A
5420984 Good et al. May 1995 A
5428627 Gupta Jun 1995 A
5440751 Santeler et al. Aug 1995 A
5465343 Henson et al. Nov 1995 A
5487170 Bass et al. Jan 1996 A
5488688 Gonzales et al. Jan 1996 A
5491701 Zook Feb 1996 A
5500848 Best et al. Mar 1996 A
5506989 Boldt et al. Apr 1996 A
5507005 Kojima et al. Apr 1996 A
5519837 Tran May 1996 A
5523903 Hetzler et al. Jun 1996 A
5544180 Gupta Aug 1996 A
5544346 Amini Aug 1996 A
5546545 Rich Aug 1996 A
5546548 Chen et al. Aug 1996 A
5563896 Nakaguchi Oct 1996 A
5572148 Lytle et al. Nov 1996 A
5574867 Khaira Nov 1996 A
5581715 Verinsky et al. Dec 1996 A
5583999 Sato et al. Dec 1996 A
5592404 Zook Jan 1997 A
5600662 Zook et al. Feb 1997 A
5602857 Zook et al. Feb 1997 A
5615190 Best et al. Mar 1997 A
5623672 Popat Apr 1997 A
5626949 Blauer et al. May 1997 A
5627695 Prins et al. May 1997 A
5640602 Takase Jun 1997 A
5649230 Lentz Jul 1997 A
5664121 Cerauskis Sep 1997 A
5689656 Baden et al. Nov 1997 A
5691994 Acosta et al. Nov 1997 A
5692135 Alvarez, II et al. Nov 1997 A
5692165 Jeddeloh et al. Nov 1997 A
5719516 Sharpe-Geisler Feb 1998 A
5729718 Au Mar 1998 A
5740466 Geldman Apr 1998 A
5745793 Atsatt et al. Apr 1998 A
5754759 Clarke et al. May 1998 A
5758188 Appelbaum et al. May 1998 A
5784569 Miller et al. Jul 1998 A
5794073 Ramakrishnan et al. Aug 1998 A
5801998 Choi Sep 1998 A
5818886 Castle Oct 1998 A
5822142 Hicken Oct 1998 A
5831922 Choi Nov 1998 A
5832310 Morrissey et al. Nov 1998 A
5835930 Dobbek Nov 1998 A
5838895 Kim et al. Nov 1998 A
5841722 Willenz Nov 1998 A
5844844 Bauer et al. Dec 1998 A
5850422 Chen Dec 1998 A
5854918 Baxter Dec 1998 A
5890207 Sne et al. Mar 1999 A
5890210 Ishii et al. Mar 1999 A
5907717 Ellis May 1999 A
5912906 Wu et al. Jun 1999 A
5925135 Trieu et al. Jul 1999 A
5937435 Dobbek et al. Aug 1999 A
5950223 Chiang et al. Sep 1999 A
5968180 Baco Oct 1999 A
5983293 Murakami Nov 1999 A
5991911 Zook Nov 1999 A
6029226 Ellis et al. Feb 2000 A
6029250 Keeth Feb 2000 A
6041417 Hammond et al. Mar 2000 A
6065053 Nouri et al. May 2000 A
6067206 Hull et al. May 2000 A
6070200 Gates et al. May 2000 A
6078447 Sim Jun 2000 A
6081849 Born et al. Jun 2000 A
6092231 Sze Jul 2000 A
6094320 Ahn Jul 2000 A
6124994 Malone, Sr. Sep 2000 A
6134063 Weston-Lewis et al. Oct 2000 A
6157984 Fisher Dec 2000 A
6178486 Gill et al. Jan 2001 B1
6192499 Yang Feb 2001 B1
6201555 Kamentser et al. Mar 2001 B1
6201655 Watanabe et al. Mar 2001 B1
6223303 Billings et al. Apr 2001 B1
6256685 Lott Jul 2001 B1
6279089 Schibilla et al. Aug 2001 B1
6297926 Ahn Oct 2001 B1
6330626 Dennin et al. Dec 2001 B1
6381659 Proch et al. Apr 2002 B2
6401149 Dennin et al. Jun 2002 B1
6418468 Ahlstrom et al. Jul 2002 B1
6418488 Chilton et al. Jul 2002 B1
6470461 Pinvidic et al. Oct 2002 B1
6487531 Tosaya et al. Nov 2002 B1
6487631 Dickinson et al. Nov 2002 B2
6490635 Holmes Dec 2002 B1
6502189 Westby Dec 2002 B1
6513085 Gugel et al. Jan 2003 B1
6530000 Krantz et al. Mar 2003 B1
6574676 Megiddo Jun 2003 B1
6584584 Smith Jun 2003 B1
6662334 Stenfort Dec 2003 B1
6735658 Thornton May 2004 B1
6826650 Krantz et al. Nov 2004 B1
6965725 Ichikawa et al. Nov 2005 B1
6965956 Herz et al. Nov 2005 B1
7035952 Elliott et al. Apr 2006 B2
7200790 Sharma et al. Apr 2007 B2
20010044873 Wilson et al. Nov 2001 A1
20020071485 Caglar et al. Jun 2002 A1
20020188769 Swidler et al. Dec 2002 A1
20030012223 Chappell et al. Jan 2003 A1
20030037225 Deng et al. Feb 2003 A1
20030126322 Micalizzi et al. Jul 2003 A1
20030167373 Winters et al. Sep 2003 A1
20040054813 Boucher et al. Mar 2004 A1
20040111660 Kim et al. Jun 2004 A1
20040139365 Hosoya Jul 2004 A1
20040223506 Sato Nov 2004 A1
20040252672 Nemazie Dec 2004 A1
20050027900 Pettey Feb 2005 A1
20050102411 Haydock May 2005 A1
20050235072 Smith et al. Oct 2005 A1
20060031605 Kao et al. Feb 2006 A1
Foreign Referenced Citations (10)
Number Date Country
0528273 Feb 1993 EP
0622726 Nov 1994 EP
0718827 Jun 1996 EP
2285166 Jun 1995 GB
63 075927 Apr 1988 JP
63-292462 Nov 1988 JP
01-315071 Dec 1989 JP
03183067 Aug 1991 JP
9814861 Apr 1998 WO
WO 0067107 Nov 2000 WO
Non-Patent Literature Citations (11)
Entry
International Search Report from the International Searching Authority dated Apr. 19, 2006 for Application No. PCT/US2005/024774; 5 pages.
Written Opinion from the International Searching Authority dated Apr. 19, 2006 for Application No. PCT/US2005/024774; 6 pages.
International Search Report from the International Searching Authority dated Apr. 13, 2006 for Application No. PCT/US2005/024910; 5 pages.
Written Opinion from the International Searching Authority dated Apr. 13, 2006 for Application No. PCT/US2005/024910; 6 pages.
PCT International Search Report, Doc. No. PCT/US00/15084, Dated Nov. 15, 2000, 2 Pages.
Blahut, R., Digital Transmission of Information (Dec. 4, 1990), pp. 429-430.
Hwang, Kai and Briggs, Faye A., “Computer Architecture and Parallel Processing” pp. 156-164.
Zeidman, Bob, “Interleaving DRAMS for faster access”, System Design ASIC & EDA, pp. 24-34 (Nov. 1993).
P.M. Bland et al., Shared Storage Bus Circuitry, IBM Technical Disclosure Bulletin, vol. 25, No. 4, Sep. 1982, pp. 2223-2224.
PCT search report for PCT/US00/07780 mailed Aug. 2, 2000, 4 Pages.
PCT Search Report for PCT/US01/22404, mailed Jan. 29, 2003, 4 Pages.
Related Publications (1)
Number Date Country
20060015774 A1 Jan 2006 US