The present disclosure relates to hard disk drives, and more particularly to increasing buffer memory of an HDD system on chip (SOC) and to improved enterprise systems including HDD SOCs.
Host devices such as computers, laptops, personal video recorders (PVRs), MP3 players, game consoles, servers, set-top boxes, digital cameras, and/or other electronic devices often need to store a large amount of data. Storage devices such as hard disk drives (HDD) may be used to meet these storage requirements.
Referring now to
A read/write device 20 is located near a distal end of the read/write arm 18. The read/write device 20 includes a write element such as an inductor that generates a magnetic field. The read/write device 20 also includes a read element (such as a magneto-resistive (MR) element) that senses the magnetic field on the platter 14. A preamp circuit 22 amplifies analog read/write signals.
When reading data, the preamp circuit 22 amplifies low level signals from the read element and outputs the amplified signal to a read/write channel device 24. When writing data, a write current is generated which flows through the write element of the read/write device 20. The write current is switched to produce a magnetic field having a positive or negative polarity. The positive or negative polarity is stored by the hard drive platter 14 and is used to represent data.
The HDD SOC 12 typically includes a buffer 32 that stores data that is associated with the control of the hard disk drive and/or buffers data to allow data to be collected and transmitted as larger data blocks to improve efficiency. The buffer 32 may employ DRAM, SDRAM or other types of low latency memory. The HDD SOC 12 further includes a processor 34 that performs processing that is related to the operation of the HDD 10.
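For illustration only, a minimal C sketch of the block-collecting behavior described above is shown below; the structure, the 4 KB block size and the function names are assumptions introduced for this example and are not taken from the disclosure. In the sketch, small writes accumulate in the low latency buffer and are committed to the media only when a complete block is available.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096u

struct hdd_buffer {
    uint8_t  data[BLOCK_SIZE];
    uint32_t used;                       /* bytes collected so far */
};

/* Stub standing in for the actual media write path; illustration only. */
static void media_write_block(const uint8_t *block, uint32_t len)
{
    (void)block;
    printf("flushing %u-byte block to the platter\n", (unsigned)len);
}

/* Accumulate small host writes; flush once a full block has been collected
 * so the data can be transmitted as one larger, more efficient block. */
void buffer_write(struct hdd_buffer *buf, const uint8_t *src, uint32_t len)
{
    while (len > 0) {
        uint32_t space = BLOCK_SIZE - buf->used;
        uint32_t chunk = (len < space) ? len : space;

        memcpy(&buf->data[buf->used], src, chunk);
        buf->used += chunk;
        src       += chunk;
        len       -= chunk;

        if (buf->used == BLOCK_SIZE) {
            media_write_block(buf->data, BLOCK_SIZE);
            buf->used = 0;
        }
    }
}
```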
The HDD SOC 12 further includes a hard disk controller (HDC) 36 that communicates with a host device via an input/output (I/O) interface 38. The HDC 36 also communicates with a spindle/voice coil motor (VCM) driver 40 and/or the read/write channel device 24. The I/O interface 38 can be a serial or parallel interface, such as an Integrated Drive Electronics (IDE), Advanced Technology Attachment (ATA), or serial ATA (SATA) interface. The spindle/VCM driver 40 controls the spindle motor 16, which rotates the platter 14. The spindle/VCM driver 40 also generates control signals that position the read/write arm 18, for example using a voice coil actuator, a stepper motor or any other suitable actuator. The I/O interface 38 communicates with an I/O interface 44 that is associated with a host device 46.
Referring now to
One or more I/O devices such as a keyboard 73 and a pointing device 74 (such as a mouse and/or other suitable device) communicate with the interface 68. The computer architecture 64 may also include a display 76, an audio output device 77 such as audio speakers and/or other input/output devices that are generally identified at 78.
In use, the HDD operates independently of the host device. The hard disk drive handles buffering of data locally to improve performance. This approach requires the hard disk drive to include low latency RAM such as DRAM, which increases the cost of the hard disk drive.
Referring now to
Referring now to
Because the two applications use different numbers of processors and different output side interfaces, manufacturers have designed and manufactured two different HDD SOC architectures for enterprise and desktop applications. In particular, the desktop HDD SOC 200 includes a single processor while the enterprise HDD SOC 230 includes two processors. In addition, the desktop HDD SOC 200 typically employs an ATA and/or SATA interface while the enterprise HDD SOC 230 typically employs a serial attached SCSI (SAS) and/or Fibre Channel (FC) interface. The separate architectures increase the design inventory and die costs of both devices.
A circuit is provided and includes a first memory and a processor. The processor is configured to receive data from a host device and transfer the data from the circuit to a storage drive. The processor is further configured to receive the data back from the storage drive (i) when a second memory in the storage drive does not have available space for the data, and (ii) prior to the data being transferred from the second memory to a third memory in the storage drive. The processor is also configured to: store the data received from the storage drive in the first memory or transfer the data received from the storage drive back to the host device; and based on a request received from the storage drive, transfer the data from the first memory or the host device back to the storage drive. The request indicates that space is available in the second memory for the data.
In other features, a method is provided and includes receiving data from a host device at a circuit and transferring the data from the circuit to a storage drive. The data is received back from the storage drive (i) when a first memory in the storage drive does not have available space for the data, and (ii) prior to the data being transferred from the first memory to a second memory in the storage drive. The data received from the storage drive is stored in a third memory or transferred back to the host device. The circuit includes the third memory. Based on a request received from the storage drive, the data is transferred from the third memory or the host device back to the storage drive. The request indicates that space is available in the second memory for the data.
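The data flow summarized in the method above can be sketched in C as follows, using that paragraph's memory numbering (the storage drive's first and second memories and the circuit's third memory). The disclosure describes behavior rather than an interface, so every buffer, constant and function name below is a hypothetical stand-in.

```c
#include <string.h>

#define THIRD_MEM_SIZE 4096u

static unsigned char third_mem[THIRD_MEM_SIZE];  /* the circuit's third memory */
static unsigned int  parked_len;                 /* bytes currently parked     */

/* Stubs standing in for the drive and host transports; illustration only. */
static void drive_write(const void *p, unsigned int len) { (void)p; (void)len; }
static void host_return(const void *p, unsigned int len) { (void)p; (void)len; }

/* Host -> circuit -> drive: the circuit simply forwards the data. */
void forward_from_host(const void *data, unsigned int len)
{
    drive_write(data, len);
}

/* The drive returns the data because its first memory has no available space,
 * i.e. before the data could be moved on to the drive's second memory. The
 * circuit parks it in its third memory if possible, otherwise hands it back
 * to the host. */
void on_drive_bounce(const void *data, unsigned int len)
{
    if (len <= THIRD_MEM_SIZE - parked_len) {
        memcpy(third_mem + parked_len, data, len);
        parked_len += len;
    } else {
        host_return(data, len);
    }
}

/* The drive later requests the data again (space has become available);
 * resend whatever was parked locally. Data that went back to the host would
 * be re-requested from it here instead (not shown). */
void on_drive_space_request(void)
{
    if (parked_len > 0) {
        drive_write(third_mem, parked_len);
        parked_len = 0;
    }
}
```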
A circuit for a storage device that communicates with a host device includes a first high speed interface. A storage controller communicates with the first high speed interface. A buffer communicates with the storage controller. The storage device generates storage buffer data during operation, and the storage controller is adapted to selectively store the storage buffer data in the buffer and/or in the host device via the first high speed interface.
Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.
The present disclosure will become more fully understood from the detailed description and the accompanying drawings, wherein:
For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. While SOCs are disclosed herein, skilled artisans will appreciate that the SOCs may be implemented as multi-chip modules.
Referring now to
The HDD SOC 302 further includes a hard disk controller (HDC) 336 that communicates with a host device via a high speed input/output (I/O) interface 338. The HDC 336 also communicates with a spindle/voice coil motor (VCM) driver 340 and/or the read/write channel device 324. The high speed I/O interface 338 can be a serial ATA (SATA) interface. The spindle/VCM driver 340 controls the spindle motor 16, which rotates the platter 14. The spindle/VCM driver 340 also generates control signals that position the read/write arm 18, for example using a voice coil actuator, a stepper motor or any other suitable actuator. The high speed I/O interface 338 communicates with a high speed I/O interface 344 that is associated with a host device 346.
The host device 346 includes a processor 348 and volatile memory 350. The host device 346 and the HDD SOC 302 allocate part of the volatile memory 350 for a host disk drive buffer (HDDB) 352. The HDD SOC 302 also includes the buffer 332. When additional RAM is needed for buffering, the HDD SOC 302 transmits/receives data over the high speed interface 338 to/from the HDDB 352 located in the volatile memory 350 of the host device 346. For example, nominal speeds of 3 Gb/s and higher can be obtained using a SATA interface. As can be appreciated, the ability to use the buffer 332 on the HDD SOC 302 as well as HDDB 352 of the host device 346 significantly increases the flexibility of the HDD SOC 302. Furthermore, by also including the buffer 332 on the HDD SOC 302, the HDD SOC 302 can also be used in applications that do not enable the HDDB 352.
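By way of illustration, the selective use of the buffer 332 and the HDDB 352 described above might be expressed in C as follows; the capacity figure, the hddb_enabled flag and the function names are assumptions made for this sketch and do not appear in the disclosure.

```c
#include <stdbool.h>
#include <stddef.h>

enum buf_location { LOC_SOC_BUFFER, LOC_HOST_HDDB, LOC_NONE };

static size_t soc_buffer_free = 2u * 1024u * 1024u; /* space left in buffer 332 (assumed 2 MB) */
static bool   hddb_enabled    = true;               /* host exposes the HDDB 352?              */

/* Stubs standing in for the local buffer write and the SATA transfer. */
static void soc_buffer_store(const void *p, size_t len) { (void)p; soc_buffer_free -= len; }
static void sata_send_to_hddb(const void *p, size_t len) { (void)p; (void)len; }

/* Prefer the on-chip buffer 332; when additional RAM is needed and the host
 * supports it, spill to the host-resident HDDB 352 over the high speed
 * interface. */
enum buf_location store_buffer_data(const void *data, size_t len)
{
    if (len <= soc_buffer_free) {
        soc_buffer_store(data, len);
        return LOC_SOC_BUFFER;
    }
    if (hddb_enabled) {
        sata_send_to_hddb(data, len);
        return LOC_HOST_HDDB;
    }
    /* Neither location can accept the data right now; a real controller
     * would flush or stall until space frees up (not shown). */
    return LOC_NONE;
}
```

When the host does not enable the HDDB 352, only the on-chip buffer 332 is used, which corresponds to the applications noted above that do not enable the HDDB 352.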
In one implementation, the host device 346 includes an operating system that allows a user to allocate a variable amount of memory for the HDDB 352 from the volatile memory 350 of the host device 346. In another implementation, the volatile memory 350 is allocated automatically and/or a fixed amount of memory is available for the HDDB 352.
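A hypothetical host-side view of this allocation choice is sketched below; the enumeration, field names and the fixed default size are illustrative assumptions only.

```c
#include <stddef.h>

enum hddb_mode { HDDB_DISABLED, HDDB_USER_SIZED, HDDB_AUTO_FIXED };

struct hddb_config {
    enum hddb_mode mode;   /* who decides the allocation                */
    size_t         bytes;  /* user-chosen amount of volatile memory 350 */
};

/* Resolve how much host memory is set aside for the HDDB 352. */
size_t hddb_allocation(const struct hddb_config *cfg)
{
    switch (cfg->mode) {
    case HDDB_USER_SIZED: return cfg->bytes;           /* variable, user-selected    */
    case HDDB_AUTO_FIXED: return 8u * 1024u * 1024u;   /* assumed fixed 8 MB default */
    default:              return 0;                    /* HDDB 352 not enabled       */
    }
}
```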
Referring now to
If step 356 is false, control determines whether there is a request to retrieve stored HDD buffer data in step 366. If false, control returns to step 354. If step 366 is true, control determines whether the buffer data is stored in the host HDDB 352 in step 370. If step 370 is false, control retrieves the buffer data from the HDD buffer 332 of the HDD SOC 302 in step 376 and control returns to step 356. If step 370 is true, control retrieves the HDD buffer data from the host HDDB 352 over the high speed interfaces 338 and 344 in step 374.
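The retrieval branch of this control flow (steps 366, 370, 374 and 376) can be rendered in C roughly as follows; the function names are hypothetical, and the step numbers in the comments refer to the flow described above rather than to any defined interface.

```c
#include <stdbool.h>

/* Stubs standing in for the real checks and transfers; illustration only. */
static bool retrieve_requested(void)                   { return true; }
static bool data_is_in_host_hddb(const void *tag)      { (void)tag; return false; }
static void read_from_soc_buffer(const void *tag)      { (void)tag; }
static void read_from_hddb_over_sata(const void *tag)  { (void)tag; }

/* One pass of the retrieval decision. Returns true when buffer data was
 * retrieved, false when control should fall back to step 354. */
bool service_retrieval(const void *tag)
{
    if (!retrieve_requested())             /* step 366 false -> back to step 354 */
        return false;

    if (data_is_in_host_hddb(tag))         /* step 370 */
        read_from_hddb_over_sata(tag);     /* step 374: over interfaces 338/344  */
    else
        read_from_soc_buffer(tag);         /* step 376: from buffer 332          */

    return true;                           /* control continues (e.g., step 356) */
}
```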
As can be appreciated, the HDD SOC 302 provides flexibility to allow use in host device applications that use the SATA interface and host memory for HDD buffering as well as applications that do not.
A system according to the present disclosure includes an HDD SOC and a bridge chip that can be used for enterprise applications. The HDD SOC can also be used for desktop applications. Referring now to
In
In some implementations, a faster processor can be used for enterprise applications and premium desktop applications while lower speed processors can be used for desktop applications and low cost enterprise applications. The ability to use the same SOC for desktop and enterprise applications allows the volume advantages associated with desktop applications to be shared with the generally lower-volume enterprise applications. Furthermore, since the same SOC can be used for both, only one SOC needs to be kept in inventory for both applications.
Referring now to
Referring now to
Some host devices cannot currently handle host-based buffer memory for the HDD SOC. In other words, there will be a transition period between an old business model and a new business model. Under the old business model, the host device does not have drivers that support host-based buffering, and the HDD SOC and/or MCM have sufficient buffer memory to support HDD operations. Under the new business model, the HDD SOC and/or MCM have only a very small FIFO memory, and the host has drivers that support host-based buffering. Implementations of the present disclosure can bridge the transition between the old and new business models.
Referring now to
Referring now to
Referring now to
One benefit of this approach is the ability to eliminate external pins on the HDD SOC 650 for memory expansion. Therefore, smaller dies can be used and fabrication costs are reduced, since pads are expensive to fabricate (particularly for CMOS processes at 90 nm and below). Pads may also require electrostatic discharge (ESD) protection, which further increases fabrication and design costs.
Referring now to
As can be appreciated, the HDD SOCs 450, 460 and 302 can be packaged as multi-chip modules if desired. While implementations of the present disclosure have been described in conjunction with magnetic storage systems, skilled artisans will appreciate that the implementations disclosed herein may also be used in conjunction with optical and/or other data read only and/or read/write systems. Those skilled in the art can now appreciate from the foregoing description that the broad teachings of the present disclosure can be implemented in a variety of forms. Therefore, while the implementations have been described in connection with particular examples thereof, the true scope of the disclosure should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification and the following claims.
The present disclosure is a continuation of U.S. application Ser. No. 13/154,356, filed Jun. 16, 2011 (now U.S. Pat. No. 8,333,555), which is a continuation of U.S. application Ser. No. 10/926,486, filed Aug. 18, 2004 (now U.S. Pat. No. 7,958,292), which claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 60/582,259, filed on Jun. 23, 2004. The disclosures of the applications referenced above are incorporated herein by reference.
Prior Publication Data

Number | Date | Country
---|---|---
20130097344 A1 | Apr 2013 | US

Provisional Application

Number | Date | Country
---|---|---
60582259 | Jun 2004 | US

Parent Case Data

Relation | Number | Date | Country
---|---|---|---
Parent | 13154356 | Jun 2011 | US
Child | 13710814 | | US
Parent | 10926486 | Aug 2004 | US
Child | 13154356 | | US