The present invention is generally related to data storage systems, and more particularly to cross-platform data storage systems and RAID systems.
One problem facing the computer industry is lack of standardization in file subsystems. This problem is exacerbated by I/O addressing limitations in existing operating systems and the growing number of non-standard storage devices. A computer and software application can sometimes be modified to communicate with normally incompatible storage devices. However, in most cases such communication can only be achieved in a manner which adversely affects I/O throughput, and thus compromises performance. As a result, many computers in use today are “I/O bound.” More particularly, the processing capability of the computer is faster than the I/O response of the computer, and performance is thereby limited. A solution to the standardization problem would thus be of interest to both the computer industry and computer users.
In theory it would be possible to standardize operating systems, file subsystems, communications and other systems to resolve the problem. However, such a solution is hardly feasible for reasons of practicality. Computer users often exhibit strong allegiance to particular operating systems and architectures for reasons having to do with what the individual user requires from the computer and what the user is accustomed to working with. Further, those who design operating systems and associated computer and network architectures show little propensity toward cooperation and standardization with competitors. As a result, performance and ease of use suffer.
Disclosed is a universal storage management system which facilitates storage of data from a client computer. The storage management system functions as an interface between the client computer and at least one storage device and facilitates reading and writing of data by handling I/O operations. More particularly, I/O operation overhead in the client computer is reduced by translating I/O commands from the client computer to high level I/O commands which are employed by the storage management system to carry out I/O operations. The storage management system also enables interconnection of a normally incompatible storage device and client computer by translating I/O requests into an intermediate common format which is employed to generate commands which are compatible with the storage device receiving the request. Files, error messages and other information from the storage device are similarly translated and provided to the client computer.
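By way of a non-limiting illustration, the following C sketch shows the general idea of the two-step translation described above: a client I/O request is first reduced to an intermediate common format, and that common format is then used to generate a command appropriate to the receiving device. The structure and function names are illustrative assumptions and are not part of the disclosed command set.

```c
/* Minimal sketch (not the patent's actual format) of the two-step
 * translation: client request -> intermediate common format -> command
 * for the target device type. All names here are hypothetical. */
#include <stdio.h>
#include <stdint.h>

typedef enum { CIO_READ, CIO_WRITE } cio_op_t;

typedef struct {              /* intermediate common format */
    cio_op_t op;
    uint64_t block;           /* starting logical block */
    uint32_t count;           /* number of blocks       */
} cio_request_t;

/* Translate a client-specific request (here just an opcode plus a byte
 * offset and length) into the common format used inside the manager. */
static cio_request_t cio_from_client(int client_op, uint64_t byte_off,
                                     uint32_t byte_len, uint32_t blk_size)
{
    cio_request_t r;
    r.op    = (client_op == 0) ? CIO_READ : CIO_WRITE;
    r.block = byte_off / blk_size;
    r.count = (byte_len + blk_size - 1) / blk_size;
    return r;
}

/* Render the common-format request for a particular device family; a
 * real system would emit SCSI CDBs, tape commands, and so on. */
static void cio_to_device(const cio_request_t *r, const char *device_type)
{
    printf("%s: %s %u blocks at LBA %llu\n", device_type,
           r->op == CIO_READ ? "READ" : "WRITE",
           r->count, (unsigned long long)r->block);
}

int main(void)
{
    cio_request_t r = cio_from_client(0, 4096, 8192, 512);
    cio_to_device(&r, "disk");
    cio_to_device(&r, "tape");
    return 0;
}
```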
The universal storage management system provides improved performance since client computers attached thereto are not burdened with directly controlling I/O operations. Software applications in the client computers generate I/O commands which are translated into high level commands which are sent by each client computer to the storage management system. The storage management system controls I/O operations for each client computer based on the high level commands. Overall network throughput is improved since the client computers are relieved of the burden of processing slow I/O requests.
The universal storage management system can provide a variety of storage options which are normally unavailable to the client computer. The storage management system is preferably capable of controlling multiple types of storage devices such as disk drives, tape drives, CD-ROMs, magneto-optical drives, etc., and making those storage devices available to all of the client computers connected to the storage management system. Further, the storage management system can determine the particular storage medium upon which any given unit of data should be stored or from which it should be retrieved. Each client computer connected to the storage management system thus gains data storage options because operating system limitations and restrictions on storage capacity are removed along with limitations associated with support of separate storage media. For example, the universal storage management system can read information from a CD-ROM and then pass that information on to a particular client computer, even though the operating system of that particular client computer has no support for or direct connection to the CD-ROM.
By providing a common interface between a plurality of client computers and a plurality of shared storage devices, network updating overhead is reduced. More particularly, the storage management system allows addition of drives to a computer network without reconfiguration of the individual client computers in the network. The storage management system thus saves installation time and removes limitations associated with various network operating systems to which the storage management system may be connected.
The universal storage management system reduces wasteful duplicative storage of data. Since the storage management system interfaces incompatible client computers and storage devices, the storage management system can share files across multiple heterogeneous platforms. Such file sharing can be employed to reduce the overall amount of data stored in a network. For example, a single copy of a given database can be shared by several incompatible computers, where multiple database copies were previously required. Thus, in addition to reducing total storage media requirements, data maintenance is facilitated.
The universal storage management system also provides improved protection of data. The storage management system isolates regular backups from user intervention, thereby addressing problems associated with forgetful or recalcitrant employees who fail to execute backups regularly.
These and other features of the present invention will become apparent in light of the following detailed description thereof.
The file management system includes four modules: a file device driver 28, a transport driver 30a, 30b, a file system supervisor 32, and a device handler 34. The file device driver provides an interface between the client operating system 36 and the transport driver. More particularly, the file device driver resides in the client computer and redirects files to the transport driver. Interfacing functions performed by the file device driver include receiving data and commands from the client operating system, converting the data and commands to a universal storage management system file format, and adding record options, such as lock, read-only and script.
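As a purely illustrative sketch of the interfacing functions described above, the following C fragment shows a universal request record carrying a file name, a command code, and record options such as lock, read-only and script; the field and flag names are assumptions made for this example and are not taken from the disclosure.

```c
/* Hedged sketch of a universal file-format request with record options
 * such as lock, read-only and script. All names are illustrative. */
#include <stdint.h>
#include <string.h>

#define UFS_OPT_LOCK      0x01u
#define UFS_OPT_READONLY  0x02u
#define UFS_OPT_SCRIPT    0x04u

typedef struct {
    char     path[256];   /* file name as seen by the client OS        */
    uint32_t opcode;      /* open/close/read/write, client-independent */
    uint32_t options;     /* OR of UFS_OPT_* record options            */
} ufs_request_t;

/* Build a universal request from a client-OS file name and opcode,
 * tagging it with the requested record options. */
static ufs_request_t ufs_build(const char *path, uint32_t opcode,
                               uint32_t options)
{
    ufs_request_t r;
    memset(&r, 0, sizeof r);
    strncpy(r.path, path, sizeof r.path - 1);
    r.opcode  = opcode;
    r.options = options;
    return r;
}

int main(void)
{
    ufs_request_t open_req = ufs_build("/projects/report.doc", 1 /* open */,
                                       UFS_OPT_LOCK | UFS_OPT_READONLY);
    (void)open_req;
    return 0;
}
```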
The transport driver 30a, 30b facilitates transfer of files and other information between the file device driver 28 and the file system supervisor 32. The transport driver is specifically configured for the link between the client computers and the storage management system. Some possible links include: SCSI-2, SCSI-3, fiber link, 802.3, 802.5, synchronous and asynchronous RS-232, wireless RF, and wireless IR. The transport driver includes two components: a first component 30a which resides in the client computer and a second component 30b which resides in the storage management system computer. The first component receives data and commands from the file device driver. The second component relays data and commands to the file system supervisor. Files, data, commands and error messages can be relayed from the file system supervisor to the client computer operating system through the transport driver and file device driver.
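The following sketch illustrates, under assumed names and an assumed header layout, how the client-resident transport component might frame a request before handing it to whichever of the above links is in use; the link itself is abstracted as a send callback so that the same framing code serves SCSI, Ethernet, RS-232 or wireless links.

```c
/* Illustrative framing sketch (not the patent's wire format). */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef int (*link_send_fn)(const void *buf, size_t len);

typedef struct {
    uint32_t magic;    /* identifies storage management traffic on the link */
    uint32_t length;   /* payload length in bytes                           */
} xport_hdr_t;

/* Client-resident component: frame a payload and push it onto the link. */
static int xport_send(link_send_fn send, const void *payload, uint32_t len)
{
    uint8_t frame[sizeof(xport_hdr_t) + 512];
    xport_hdr_t hdr = { 0x534D4131u /* "SMA1" */, len };

    if (len > sizeof frame - sizeof hdr)
        return -1;
    memcpy(frame, &hdr, sizeof hdr);
    memcpy(frame + sizeof hdr, payload, len);
    return send(frame, sizeof hdr + len);
}

/* Stand-in for a SCSI, Ethernet or RS-232 link driver; a real driver
 * would transmit the frame to the storage management system computer. */
static int demo_link(const void *buf, size_t len)
{
    (void)buf;
    printf("sent %zu bytes over the link\n", len);
    return 0;
}

int main(void)
{
    const char request[] = "OPEN /projects/report.doc";
    return xport_send(demo_link, request, sizeof request);
}
```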
The file system supervisor 32 determines the appropriate file-level application to receive each file received from the client computer 10. The file system supervisor implements file-specific routines on a common format file system. Calls made to the file system supervisor are high level, such as Open, Close, Read, Write, Lock, and Copy. The file system supervisor also determines where files should be stored, including determining on what type of storage media the files should be stored. The file system supervisor also breaks each file down into blocks and then passes those blocks to the device handler. Similarly, the file system supervisor can receive data from the device handler.
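A minimal sketch of the block-level hand-off described above follows, assuming a fixed block size and a device-handler callback; it is intended only to illustrate how the file system supervisor might break a file into blocks and pass each block downward.

```c
/* Sketch: split a file-sized buffer into fixed-size blocks and hand
 * each block to a device handler. Names and block size are assumptions. */
#include <stdio.h>
#include <stdint.h>

#define FS_BLOCK_SIZE 4096u

typedef int (*block_write_fn)(uint64_t block_no, const void *data, uint32_t len);

/* Write `len` bytes of `data` as a sequence of blocks starting at
 * `start_block`, calling the device handler once per block. */
static int fs_write_blocks(block_write_fn handler, uint64_t start_block,
                           const uint8_t *data, uint32_t len)
{
    uint64_t blk = start_block;
    while (len > 0) {
        uint32_t chunk = len < FS_BLOCK_SIZE ? len : FS_BLOCK_SIZE;
        if (handler(blk, data, chunk) != 0)
            return -1;
        data += chunk;
        len  -= chunk;
        blk++;
    }
    return 0;
}

static int demo_handler(uint64_t block_no, const void *data, uint32_t len)
{
    (void)data;
    printf("device handler: block %llu, %u bytes\n",
           (unsigned long long)block_no, len);
    return 0;
}

int main(void)
{
    uint8_t file[10000] = {0};
    return fs_write_blocks(demo_handler, 100, file, sizeof file);
}
```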
The device handler 34 provides an interface between the file system supervisor 32 and the SMA kernel 26 to provide storage device selection for each operation. A plurality of device handlers are employed to accommodate a plurality of storage devices. More particularly, each device handler is a driver which is used by the file system supervisor to control a particular storage device, and which allows the file system supervisor to select the type of storage device to be used for a specific operation. The device handlers reside between the file system supervisor on one side and the SMA kernel and storage devices on the other. The device handler thus isolates the file system supervisor from the storage devices such that the file system supervisor configuration is not dependent upon the configuration of the specific storage devices employed in the system.
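One way to realize the isolation described above is to express each device handler as a table of operations through which the file system supervisor makes all device accesses. The following C interface sketch is an assumption for illustration and does not reproduce the actual handler interface.

```c
/* Sketch of a device-handler operations table; the supervisor calls
 * through this table and never touches device details directly. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct device_handler {
    const char *media_type;                 /* "disk", "tape", "cdrom", ... */
    int (*open)(struct device_handler *dh);
    int (*read_block)(struct device_handler *dh, uint64_t lba, void *buf);
    int (*write_block)(struct device_handler *dh, uint64_t lba, const void *buf);
    int (*close)(struct device_handler *dh);
    void *private_data;                     /* device-specific state */
} device_handler_t;

/* The supervisor selects a handler by media type from a registered set. */
static device_handler_t *select_handler(device_handler_t **handlers,
                                        size_t count, const char *media_type)
{
    for (size_t i = 0; i < count; i++)
        if (handlers[i] && strcmp(handlers[i]->media_type, media_type) == 0)
            return handlers[i];
    return NULL;
}

static int null_open(struct device_handler *dh) { (void)dh; return 0; }

int main(void)
{
    device_handler_t disk = { "disk", null_open, NULL, NULL, NULL, NULL };
    device_handler_t *pool[] = { &disk };
    device_handler_t *found = select_handler(pool, 1, "disk");
    return found ? found->open(found) : 1;
}
```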
The SMA Kernel 26 includes three independent modules: a front end interface 36, a scheduler 38, and a back-end interface 40. The front end interface is in communication with the client network and the scheduler. The scheduler is in communication with the back-end interface, device level applications, redundant array of independent disks (“RAID”) applications and the file management system. The back-end interface is in communication with various storage devices.
The front-end interface 36 handles communication between the client network 12 and the resource scheduler 38. It runs on a storage management system-based host controller which is connected to the client network and interfaced to the resource scheduler. A plurality of scripts are loaded at start-up for on-demand execution of communication tasks. Further, if the client computer and storage management system both utilize the same operating system, the SMA kernel can be utilized to execute I/O commands from software applications in the client computer without first translating the I/O commands to high level commands as is done in the file management system.
The resource scheduler 38 supervises the flow of data through the universal storage management system. More particularly, the resource scheduler determines whether individual data units can be passed directly to the back-end interface 40 or whether the data unit must first be processed by one of the device level applications 42 or RAID applications 44. Block level data units are passed to the resource scheduler from either the front-end interface or the file management system.
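The routing decision performed by the resource scheduler can be sketched as follows; the flags and function names are assumptions, and a real scheduler would consult volume and array metadata rather than per-block flags.

```c
/* Hedged sketch of the routing decision: a block-level unit either goes
 * straight to the back-end interface or is first passed through a
 * device-level or RAID application. Names are illustrative. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool needs_raid;      /* destination volume is a RAID array          */
    bool needs_devapp;    /* requires a device-level application first   */
    unsigned block_no;
} block_unit_t;

static void backend_submit(const block_unit_t *b) { printf("back-end: block %u\n", b->block_no); }
static void raid_process(block_unit_t *b)         { printf("RAID app: block %u\n", b->block_no); }
static void devapp_process(block_unit_t *b)       { printf("device-level app: block %u\n", b->block_no); }

/* Scheduler entry point for one block-level data unit. */
static void scheduler_dispatch(block_unit_t *b)
{
    if (b->needs_devapp)
        devapp_process(b);
    if (b->needs_raid)
        raid_process(b);
    backend_submit(b);     /* every unit ultimately reaches the back end */
}

int main(void)
{
    block_unit_t plain    = { false, false, 1 };
    block_unit_t mirrored = { true,  false, 2 };
    scheduler_dispatch(&plain);
    scheduler_dispatch(&mirrored);
    return 0;
}
```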
The back-end interface 40 manages the storage devices 14. The storage devices are connected to the back-end interface by one or more SCSI type controllers through which the storage devices are connected to the storage management system computer. In order to control non-standard SCSI devices, the back-end interface includes pre-loaded scripts and may also include device specific drivers.
The storage management system employs high level commands to access the storage devices. The high level commands include array commands and volume commands, as follows:
The acreate command creates a new array by associating a group of storage devices in the same rank and assigning them a RAID level.
The aremove command removes the definition of a given array name and makes the associated storage devices available for the creation of other arrays.
The vopen command creates and/or opens a volume, and brings the specified volume on-line and readies that volume for reading and/or writing.
The vclose command closes a volume, brings the specified volume off-line, and removes all access restrictions imposed on the volume by the task that opened it.
The vread command reads a specified number of blocks into a given buffer from an open volume given by “vh”.
The vwrite command writes a specified number of blocks from the given buffer to an open volume given by “vh.”
The volcpy command copies “count” number of blocks from the location given by src_addr in src_vol to the logical block address given by dest_addr in dest_vol. Significantly, the command is executed without interaction with the client computer.
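The exact syntax of these commands is not reproduced here. The following header-style C sketch is merely one plausible rendering of the descriptions above; parameter names such as vh, count, src_vol, src_addr, dest_vol and dest_addr are taken from the text, while the remaining types and signatures are assumptions.

```c
/* Header-style sketch of the array and volume commands described above.
 * Signatures are illustrative assumptions, not the patented syntax. */
#include <stdint.h>
#include <stddef.h>

typedef int volume_handle_t;   /* "vh": handle to an open volume */

/* Array commands */
int acreate(const char *array_name, const char *devices[], size_t ndevices,
            int raid_level);                    /* group devices of one rank into an array */
int aremove(const char *array_name);            /* release the devices for other arrays    */

/* Volume commands */
volume_handle_t vopen(const char *volume_name);  /* create/open and bring on-line          */
int vclose(volume_handle_t vh);                  /* bring off-line, drop access restrictions */
int vread(volume_handle_t vh, uint64_t lba, void *buf, uint32_t nblocks);
int vwrite(volume_handle_t vh, uint64_t lba, const void *buf, uint32_t nblocks);

/* Copy `count` blocks from src_addr in src_vol to dest_addr in dest_vol,
 * entirely within the storage system (no client interaction). */
int volcpy(volume_handle_t dest_vol, uint64_t dest_addr,
           volume_handle_t src_vol, uint64_t src_addr, uint32_t count);
```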
The modular design of the storage management system software provides several advantages. The SMA Kernel and file management system are independent program groups which do not have interdependency limitations. However, both program groups share a common application programming interface (API). Further, each internal software module (transport driver, file system supervisor, device handler, front-end interface, back-end interface and scheduler) interacts through a common protocol. Development of new modules or changes to an existing module thus do not require changes to other SMA modules, provided compliance with the protocol is maintained. Additionally, software applications in the client computer are isolated from the storage devices and their associated limitations. As such, the complexity of application development and integration is reduced, and the reduced complexity allows faster development cycles. The architecture also offers high maintainability, which translates into simpler testing and quality assurance processes, and the ability to implement projects in parallel results in a faster time to market.
The universal storage management system utilizes a standard file format which is selected based upon the cross-platform client network for ease of file management system implementation. The file format may be based on UNIX, Microsoft NT or other file formats. In order to facilitate operation and enhance performance, the storage management system may utilize the same file format and operating system utilized by the majority of the client computers connected thereto; however, this is not required. Regardless of the file format selected, the file management system includes at least one file device driver, at least one transport driver, a file system supervisor and a device handler to translate I/O commands from the client computer.
Preferably both horizontal and vertical power sharing are employed. In horizontal power sharing, the power supplies 54 for each rack of storage devices include one redundant power supply 58 which is utilized when a local power supply 54 in the associated rack fails. In vertical power sharing, a redundant power supply 60 is shared between a plurality of racks 56 of storage devices.
File storage routines may be implemented to automatically select the type of media upon which to store data. Decision criteria for selecting the type of media on which to store a file can be taken from a data file with predetermined attributes. Thus, the file device driver can direct data to particular media in an intelligent manner. To further automate data storage, the storage management system includes routines for automatically selecting an appropriate RAID level for storage of each file. When the storage management system is used in conjunction with a computer network it is envisioned that a plurality of RAID storage options of different RAID levels will be provided. In order to provide efficient and reliable storage, software routines are employed to automatically select the appropriate RAID level for storage of each file based on file size. For example, in a system with RAID levels 3 and 5, large files might be assigned to RAID-3, while small files would be assigned to RAID-5. Alternatively, the RAID level may be determined based on block size, as predefined by the user.
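A minimal sketch of the size-based selection in the example above follows; the one-megabyte threshold is an assumption, since the cut-off between "large" and "small" files is left to the particular implementation.

```c
/* Sketch of size-based RAID level selection: large files to RAID-3,
 * small files to RAID-5. The threshold is illustrative only. */
#include <stdio.h>
#include <stdint.h>

#define LARGE_FILE_THRESHOLD (1u << 20)   /* 1 MiB, an assumed cut-off */

static int select_raid_level(uint64_t file_size_bytes)
{
    return (file_size_bytes >= LARGE_FILE_THRESHOLD) ? 3 : 5;
}

int main(void)
{
    printf("64 KiB file -> RAID-%d\n", select_raid_level(64u * 1024));
    printf("16 MiB file -> RAID-%d\n", select_raid_level(16u * 1024 * 1024));
    return 0;
}
```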
An automatic storage device ejection method is illustrated in the drawings.
Automatic media selection is employed to facilitate defining volumes and arrays for use in the system. As a practical matter, it is preferable for a single volume or array to be made up of a single type of storage media. However, it is also preferable that the user not be required to memorize the location and type of each storage device in the pool. The automatic media selection feature therefore maintains a record of each storage device in the pool, and when a volume or array is defined, the locations of the different types of storage devices are brought to the attention of the user. This and other features are preferably implemented with a graphic user interface (“GUI”) 108.
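The pool record behind automatic media selection can be sketched as follows, using assumed structures; when a volume or array is being defined, any member whose media type differs from the others is flagged so that the condition can be brought to the user's attention (through the GUI in the actual system).

```c
/* Sketch of a device-pool record and a mixed-media check performed when
 * a volume or array is defined. Structures and names are assumptions. */
#include <stdio.h>
#include <string.h>

typedef struct {
    char id[16];          /* e.g. a SCSI target identifier   */
    char media_type[16];  /* "disk", "optical", "tape", ...  */
} pool_device_t;

/* Return 0 if all selected devices share one media type, else warn. */
static int check_volume_members(const pool_device_t *devs, int n)
{
    int mixed = 0;
    for (int i = 1; i < n; i++) {
        if (strcmp(devs[i].media_type, devs[0].media_type) != 0) {
            printf("note: %s is %s, volume is otherwise %s\n",
                   devs[i].id, devs[i].media_type, devs[0].media_type);
            mixed = 1;
        }
    }
    return mixed;
}

int main(void)
{
    pool_device_t pool[] = {
        { "sd0", "disk" }, { "sd1", "disk" }, { "mo0", "optical" },
    };
    return check_volume_members(pool, 3);
}
```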
Further media selection routines may be employed to provide reduced data access time. Users generally prefer to employ storage media with a fast access time for storage of files which are being created or edited. For example, it is much faster to work from a hard disk than from a CD-ROM drive. However, fast access storage media is usually more costly than slow access storage media. In order to accommodate both cost and ease of use considerations, the storage management system can automatically relocate files within the system based upon the frequency at which each file is accessed. Files which are frequently accessed are relocated to and maintained on fast access storage media. Files which are less frequently accessed are relocated to and maintained on slower storage media.
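The relocation policy described above can be sketched as follows; the access counters, thresholds and media tiers shown are assumptions used only to illustrate the frequency-based migration.

```c
/* Sketch of frequency-based relocation: frequently accessed files move
 * to fast media, rarely accessed files to slow media. Thresholds and
 * tiers are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>

typedef enum { MEDIA_FAST_DISK, MEDIA_SLOW_OPTICAL } media_t;

typedef struct {
    const char *name;
    uint32_t    accesses_this_period;
    media_t     location;
} file_record_t;

#define HOT_THRESHOLD  10u   /* promote above this       */
#define COLD_THRESHOLD 1u    /* demote at or below this  */

static void rebalance(file_record_t *f)
{
    if (f->accesses_this_period > HOT_THRESHOLD && f->location != MEDIA_FAST_DISK) {
        f->location = MEDIA_FAST_DISK;
        printf("%s -> fast media\n", f->name);
    } else if (f->accesses_this_period <= COLD_THRESHOLD && f->location != MEDIA_SLOW_OPTICAL) {
        f->location = MEDIA_SLOW_OPTICAL;
        printf("%s -> slow media\n", f->name);
    }
    f->accesses_this_period = 0;   /* start a new measurement period */
}

int main(void)
{
    file_record_t a = { "report.doc",  42, MEDIA_SLOW_OPTICAL };
    file_record_t b = { "archive.dat",  0, MEDIA_FAST_DISK };
    rebalance(&a);
    rebalance(&b);
    return 0;
}
```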
The method executed by the microprocessor-controlled backplane is illustrated in the drawings.
A READ cycle is illustrated in the drawings.
A WRITE cycle is illustrated in the drawings.
Other modifications and alternative embodiments of the present invention will become apparent to those skilled in the art in light of the information provided herein. Consequently, the invention is not to be viewed as limited to the specific embodiments disclosed herein.
This is a reissue application of U.S. Pat. No. 6,098,128 that issued on Aug. 1, 2000. A claim of priority is made to U.S. Provisional Patent Application Ser. No. 60/003,920, entitled UNIVERSAL STORAGE MANAGEMENT SYSTEM, filed Sep. 18, 1995.
Related U.S. Application Data: provisional application Ser. No. 60/003,920, filed September 1995 (US); parent application Ser. No. 08/714,846, filed September 1996 (US); child application Ser. No. 10/210,592 (US).