Data storage devices having IP capable partitions

Information

  • Patent Grant
  • Patent Number
    8,473,578
  • Date Filed
    Thursday, July 28, 2011
  • Date Issued
    Tuesday, June 25, 2013
Abstract
Apparatuses, methods, and systems related to IP-addressable partitions are disclosed. In some embodiments, an IP address is used to uniquely identify a selected subset of partitions. Other embodiments may be described and claimed.
Description
FIELD OF THE INVENTION

The field of the invention is data storage devices.


BACKGROUND OF THE INVENTION

There is a trend within the field of electronics to physically (i.e. geographically) disaggregate functionality, and to rely instead on networked resources. Of special interest are resources available over a packet communications network such as the Internet. In addition to the data being transferred, packets include header information such as the type of data contained in the packet (e.g. HTML, voice, ASCII), and origination and destination node information. The header information permits error checking, and routing across packet switched networks such as the Internet between devices that may be widely spaced apart. The header information also allows extremely disparate devices to communicate with each other, such as enabling a clock radio to communicate with a computer. Recently published US patent application no. 20020031086 (Welin, Mar. 14, 2002) refers to linking “computers, IP phones, talking toys and home appliances such as refrigerators, microwave ovens, bread machines, blenders, coffee makers, laundry machines, dryers, sweepers, thermostat assemblies, light switches, lamps, fans, drape and window shade motor controls, surveillance equipment, traffic monitoring, clocks, radios, network cameras, televisions, digital telephone answering devices, air conditioners, furnaces and central air conditioning apparatus.”


Communication with storage devices has not kept pace with the trend to disaggregate resources. Disk access has always been under the control of a disk operating system such as DOS or Microsoft® Windows®. Unfortunately, putting the operating system at the conceptual center of all computing devices has resulted in a dependence on such operating systems, and has tended to produce ever larger and more complicated operating systems. Now that many electronic devices, from personal digital assistants to telephones, digital cameras, and game consoles, are becoming smaller and ever more portable, the dependence on large operating systems has become a liability. One solution is to provide a stripped-down operating system that requires much less overhead; Microsoft® Windows® CE is an example. That solution, however, sacrifices considerable functionality present in the larger systems.


What is needed is a storage device that can be directly accessed by multiple other devices, without the need to go through an operating system.


SUMMARY OF THE INVENTION

In the present invention a storage device has partitions that are separately addressed by distinct IP addresses. This allows direct access of the partitions, on a peer-to-peer basis, by any other device that can communicate using IP. Many limitations on access to the storage device can thereby be eliminated, including geographical limitations, and the need for a given storage partition to be under the central control of a single operating system.


Preferred storage devices support spanning between or among partitions of the same device, as well as between or among different storage devices. Both multicast and proxy spanning are contemplated.


Combinations of the inventive storage devices with each other, and with prior art storage devices are contemplated, in all manner of mirroring and other arrangements.


In still other aspects of the invention, a given storage device can comprise one or more types of media, including any combination of rotating and non-rotating media, magnetic and optical, and so forth.


Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures.





BRIEF DESCRIPTION OF THE DRAWING


FIG. 1 is a schematic of a prior art disk drive split into multiple partitions, but where the entire memory is accessed using a single IP address.



FIG. 2 is a schematic of a prior art storage system in which three disk drives are addressed in their entireties using three different IP addresses.



FIG. 3 is a schematic of a storage device having multiple partitions that are separately addressed by different IP addresses.



FIG. 4 is a schematic of a storage device having multiple partitions that are separately addressed by different IP addresses, and some of the partitions are addressed using multiple IP addresses.



FIG. 5 is a schematic of a storage device having multiple partitions comprising different storage media.



FIG. 6 is a schematic of a storage device having multiple partitions, two of which are spanned using multicast spanning.



FIG. 7 is a schematic of a storage device having multiple partitions, two of which are spanned using proxy spanning.



FIG. 8 is a schematic of a storage system in which three storage devices are logically coupled using multicast spanning.



FIG. 9 is a schematic of a storage system in which three storage devices are logically coupled using proxy spanning.



FIG. 10 is a schematic of a storage system in which partitions of a first storage device are mirrored on partitions of one or more additional storage devices using multicast mirroring.





DETAILED DESCRIPTION

Prior art FIG. 1 generally depicts a disk drive 10 that is split into multiple partitions 10A, 10B, 10C . . . 10N. The entire storage area is addressed using a single address IP1, with individual blocks of data being addressed by a combination of IP1 and some other information such as partition and offset, or Logical Block Address (LBA). The data is thus always accessed under the control of a disk operating system that provides the additional information. For that reason drive 10 is usually located very close to the processor that runs the operating system, and is usually connected to a hard bus of a computer, RAID or other system.


It is known to format the various partitions 10A . . . 10N differently from one another, under control of different operating systems. However, the entire memory space comprises a single media type, namely rotating magnetic memory, even though there may be some sort of RAM buffer (not shown).


It should be appreciated that the term “IP” is used herein in a broad sense, to include any networking protocol. Thus, “IP address” is used herein as shorthand for a network address.


Prior art FIG. 2 generally depicts a storage system 20 in which three disk drives 21, 22, 23 are addressed using three different IP addresses, IP1, IP2, and IP3. The drives can have multiple partitions (drive 21 has three partitions 21A, 21B, 21C (not shown), and drive 23 has two partitions 23A and 23B (not shown)), but here again individual blocks of data are addressed using a combination of the IP address and some other information such as partition and offset, or LBA. Drives 21, 22, 23 can be spanned and/or mirrored, but the data on each drive is always accessed using that drive's particular IP address.


In FIG. 3, a storage device 30 according to the present invention has three partitions 21A, 21B, 21C, which are separately addressed by different IP addresses IP1, IP2, IP3, respectively. Those skilled in the art will appreciate that showing a small plurality of partitions is merely a matter of convenience, in this and other figures, and that storage device 30 could have any practical number of partitions. Similarly, it should be appreciated that depicting storage devices without partitions indicates that such devices have no partitions.


Utilizing IP addresses to route packets directly to and from partitions facilitates the use of very light communication protocols. In particular, the partitions may be directly addressed at the IP level of a TCP/IP or UDP/IP stack. It should be appreciated, however, that in order to make use of the IP addresses, the storage device 30 (and indeed the various partitions) would need to have sufficient functionality to communicate using IP. That functionality could be designed into the devices (or partitions), or it could be added onto storage devices using an IP adapter 32 (not shown). Indeed, the adapter in such circumstances would essentially be a simple block-to-packet and packet-to-block translator.
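The translator role described above can be sketched as follows. This is a minimal illustration in Python, assuming a hypothetical field layout and a fixed 512-byte block size; the disclosure does not specify an actual wire format.

```python
# Minimal sketch of a block-to-packet / packet-to-block translator.
# The packet layout (a dict with dst_ip, lba, payload) is illustrative only.

BLOCK_SIZE = 512

def block_to_packet(partition_ip: str, lba: int, block: bytes) -> dict:
    """Wrap one storage block in a packet addressed to a partition's IP."""
    assert len(block) == BLOCK_SIZE, "one full block per packet"
    return {"dst_ip": partition_ip, "lba": lba, "payload": block}

def packet_to_block(packet: dict) -> tuple:
    """Unwrap a packet back into (lba, block) for the drive electronics."""
    return packet["lba"], packet["payload"]

pkt = block_to_packet("10.0.0.7", 42, bytes(BLOCK_SIZE))
lba, blk = packet_to_block(pkt)
```

Because the translation is stateless and symmetric, such an adapter could sit in front of a conventional drive without the drive being aware of the network at all.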


Storage device 30 can be connected to any suitable bus by any suitable means. Thus, the operative principles herein can operate across a wide variety of physical buses and protocols, including ATA, ATAPI, SCSI, Fibre Channel, PCMCIA, CardBus, and USB. Storage device 30 can also alternatively or additionally operate across a network acting as a virtual IP bus, with the term “IP” being used herein generically with reference to any internetworking protocol that handles packets. It is contemplated, for example, that a user may have a stand-alone storage device that communicates wirelessly with a Local Area Network (LAN), which in turn may be connected to a WAN or to the Internet. Other devices that are also connected to the network (whether in the home, office, or elsewhere) could directly access one or more partitions of the storage device. For example, an IP capable television (not shown) could display images or movies stored on one partition, while a digital camera (not shown) could store/retrieve images on another partition. Still another partition might hold an operating system and office software for use with a laptop, or even an IP capable display and IP capable keyboard and mouse. Printing from any of the partitions might occur on an IP capable printer that is also connected wirelessly, or by hardwire, to the network.


An interesting corollary is that the partitions or other elements can all communicate as peers on a peer-to-peer network. As used herein, the term “element” refers to a hardware unit that is a functional portion of a device, and traditionally communicates with other units of the same device across a bus, without having its own IP address. This can completely eliminate dependence on any particular operating system, and can eliminate operating systems altogether. In addition, many of the elements attached to the network will be dependent on other elements attached to the network to perform tasks that are not within their individual capacities, and will be able to discover, reserve, and release the resources of other peers needed to perform such tasks. Peers will preferably be able to discover the other elements attached to the network, the characteristics of the other elements attached to the network, and possibly the contents of at least some of the elements attached to the network. Such discovery is accomplished without the assistance of a master device, and will preferably involve direct communication between the peer elements.


Preferred networks will be masterless in that all elements have equal access to the network and the other elements attached to the network. The peer elements of the network will preferably communicate with each other utilizing low-level protocols such as those that would equate to those of the transport and lower layers of the OSI model. Preferred embodiments will utilize the TCP/IP and UDP/IP protocols for communication between elements.


Storage device 30 is preferably able to dynamically create partitions upon receipt of requests from network elements. For example, when a network element requests use of device 30, the network element may provide a unique identifier, possibly a name, to storage device 30, which in turn associates the identifier with any newly created partition. In some instances the network element may also request a particular storage size to be allocated, including all of the remaining storage available on the storage device 30.
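The dynamic partition creation described above can be sketched as follows. The `PartitionServer` class and its request fields are illustrative stand-ins for the exchange between a network element and storage device 30, not part of the disclosure.

```python
# Sketch of dynamic partition creation upon request from a network element.
# A requested size of None asks for all remaining storage, as contemplated above.

class PartitionServer:
    def __init__(self, capacity_blocks: int):
        self.free = capacity_blocks
        self.partitions = {}                 # unique identifier -> allocated size

    def create_partition(self, identifier: str, size_blocks: int = None) -> int:
        """Allocate a partition and associate it with the requester's identifier."""
        size = self.free if size_blocks is None else size_blocks
        if size > self.free:
            raise ValueError("insufficient space")
        self.free -= size
        self.partitions[identifier] = size
        return size

dev = PartitionServer(capacity_blocks=1000)
dev.create_partition("camera-images", 300)   # element asks for a specific size
leftover = dev.create_partition("laptop-os") # element takes all remaining space
```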


In preferred embodiments, the IP addresses for such partitions are obtained from an address server such as a DHCP server upon request from the storage device 30. It is important to note, however, that address allocation devices such as DHCP servers are not masters, since they don't control the network, elements coupled to the network, or the sharing of resources between elements. Assignment of IP addresses to partitions may additionally or alternatively occur during initialization of the device, such as when it is first turned on.


Since storage device 30 may be associated with only a single network interface card (NIC), it is preferred that storage elements be able to obtain multiple IP addresses despite having a single NIC and a single media access control (MAC) address. This can be accomplished by providing a unique partition identifier to an address server when trying to obtain an IP address from the address server. It is contemplated that associating a name provided by an element with any partition created for that element makes it possible to identify each of the partitions of a storage element, despite the fact that the IP address associated with each partition may have changed since the partition was created.
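The one-NIC, many-addresses scheme can be sketched as below. The `AddressServer` is a stand-in for a DHCP server; its interface and the address pool are assumptions for illustration only.

```python
# Sketch of obtaining per-partition IP addresses through a single NIC/MAC by
# presenting a unique partition identifier to an address server (a DHCP
# stand-in here).  One address is leased per (MAC, partition) pair.

import itertools

class AddressServer:
    def __init__(self):
        self._pool = (f"192.168.1.{n}" for n in itertools.count(10))
        self._leases = {}                    # (mac, partition_id) -> ip

    def lease(self, mac: str, partition_id: str) -> str:
        """Hand out one address per (NIC, partition) pair, reusing existing leases."""
        key = (mac, partition_id)
        if key not in self._leases:
            self._leases[key] = next(self._pool)
        return self._leases[key]

server = AddressServer()
mac = "00:11:22:33:44:55"                    # single NIC, single MAC address
ip_a = server.lease(mac, "partition-A")
ip_b = server.lease(mac, "partition-B")      # distinct address, same MAC
ip_a_again = server.lease(mac, "partition-A")
```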


Additional details can be found in concurrently filed PCT application no. PCT/US02/40205 entitled “Communication Protocols, Systems and Methods” and PCT application no. PCT/US02/40198, entitled “Electrical Devices With Improved Communication”, the disclosures of which are incorporated herein by reference.


In FIG. 4, storage device 40 is similar to storage device 30 in that it has multiple partitions 41A, 41B, 41C, 41D that are separately addressed by different IP addresses IP1, IP2, IP3, IP4, respectively. But here some of the partitions are addressed using multiple IP addresses. In particular, partition 41A is addressed with IP1 and IP5. Partition 41D is addressed with IP4, IP6 and IP7.


In FIG. 5 a storage device 50 has multiple partitions comprising different storage media. In this particular example there are two partitions of rotating media, 50A and 50B, and one partition of flash memory, 50C. All other practical combinations of these and other media are also contemplated. As in FIG. 3, the various partitions are separately addressed by different IP addresses IP1, IP2, IP3, respectively.


In FIG. 6 a storage device 60 has multiple partitions 60A, 60B, 60C, 60D, addressed by IP addresses IP1, IP2, IP3, and IP4, respectively, plus a multicast address IP5. Two of these partitions, 60A and 60C, are spanned in that partition 60A extends from logical address a to logical address b, while partition 60C continues from logical address b+1 to logical address c. The spanned set is thus logical address a to logical address c. The spanning here is multicast spanning, because partitions 60A and 60C share the multicast address IP5, which is used to address both.
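The spanned-address arithmetic of FIG. 6 can be sketched as follows, using illustrative concrete values for the logical addresses a, b, and c.

```python
# Sketch of resolving a logical address within a spanned set, following the
# FIG. 6 layout: partition 60A covers [a, b], partition 60C covers [b+1, c].
# The concrete address values are illustrative only.

a, b, c = 0, 999, 1999
span = [("60A", a, b), ("60C", b + 1, c)]    # (partition, first, last)

def resolve(logical: int) -> tuple:
    """Map a spanned logical address to (partition, offset within partition)."""
    for name, first, last in span:
        if first <= logical <= last:
            return name, logical - first
    raise ValueError("address outside spanned set")

part, off = resolve(1500)                    # falls in 60C's half of the span
```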


In FIG. 7 a storage device 70 has multiple partitions 70A, 70B, 70C, 70D, addressed by IP addresses IP1, IP2, IP3, IP8, respectively. (The use of IP8 here rather than IP4 is intended to illustrate that the IP addresses need not be consecutive in any manner.) Here again two of the partitions are spanned, 70A and 70C, in that partition 70A extends from logical address a to logical address b, while partition 70C continues from logical address b+1 to logical address c. The spanned set is thus once again logical address a to logical address c. Here, however, we are dealing with proxy spanning as opposed to multicast spanning. IP1 is used to address partition 70A, while the second part of the spanned data, in partition 70C, is addressed by the IP1 proxy using IP3. Of course, it is possible to combine multicast spanning and proxy spanning within the same storage device.
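The proxy spanning of FIG. 7 can be sketched similarly. The forwarding mechanism and the concrete address values below are illustrative assumptions; the point is that the first partition serves its own range and re-addresses the remainder to IP3.

```python
# Sketch of proxy spanning per FIG. 7: requests arrive at IP1 (partition 70A);
# addresses past 70A's range are forwarded by the proxy to partition 70C (IP3).
# Address ranges and the forwarding function are illustrative only.

a, b, c = 0, 999, 1999

def forward_to_70C(offset: int) -> str:
    """Stand-in for re-addressing the request to IP3 for partition 70C."""
    return f"70C serves offset {offset} (via IP1 proxy using IP3)"

def handle_at_70A(logical: int) -> str:
    """Partition 70A serves its own range and proxies the remainder to 70C."""
    if a <= logical <= b:
        return f"70A serves offset {logical - a}"
    if b < logical <= c:
        return forward_to_70C(logical - (b + 1))
    raise ValueError("address outside spanned set")

local = handle_at_70A(10)      # within 70A's range, served directly
proxied = handle_at_70A(1500)  # beyond 70A's range, proxied to 70C
```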


In FIG. 8 a storage system 100 has three storage devices 110, 120, and 130 that are logically coupled using multicast spanning. Device 110 has three partitions 110A, 110B and 110C, which are separately addressed using IP addresses IP1, IP2, and IP3, respectively. Device 120 has four partitions 120A, 120B, 120C, and 120D, which are separately addressed using IP addresses IP4, IP5, IP6, and IP7, respectively. Device 130 is not partitioned, which for our purposes is the same as saying that it has only one partition. The entirety of the storage area of device 130 is addressed using IP address IP8. The spanning in this case is among all three drives. Partition 110C extends from logical address a to logical address b; partition 120D continues from logical address b+1 to logical address c, and the data space of device 130 extends from logical address c+1 to logical address d. The data set extends from logical address a to logical address d.



FIG. 9 is similar to FIG. 8, in that spanning occurs across three drives, and the data set extends from logical address a to logical address d. The main conceptual difference is that the storage devices are logically coupled using proxy spanning rather than multicast spanning. Here, device 210 has three partitions 210A, 210B and 210C, which are separately addressed using IP addresses IP1, IP2, and IP3, respectively. Device 230 is not partitioned. The entirety of the storage area of device 230 is addressed using IP address IP4. Device 220 has three partitions, 220A, 220B and 220C, which are separately addressed using IP addresses IP4, IP5, and IP6, respectively. Partition 210C extends from logical address a to logical address b; the data space of partition 220C continues from logical address b+1 to logical address c, and the data space of device 230 extends from logical address c+1 to logical address d.


As elsewhere in this specification, the specific embodiments shown with respect to FIG. 9 are merely examples of possible configurations. A greater or lesser number of storage devices could be utilized, and indeed spanning may be protean, in that devices and/or partitions may be added to or dropped from the spanning over time. There can also be any combination of multicast and proxy spanning across and/or within storage devices, which may have the same or different media. Moreover, the use of IP addresses facilitates physically locating the various storage devices virtually anywhere an IP network can reach, regardless of the relative physical locations among the devices.


In FIG. 10 a storage system 300 provides mirroring of partitions between three different physical storage devices 310, 320, and 330. This could be done by proxy, in a manner analogous to that described above for proxy spanning, or in higher performance systems using multicasting. Thus, partitions in multiple storage devices are addressed using the same IP address. In this particular embodiment, storage device 310 has partitions 310A, 310B, and 310C, addressed using IP addresses IP1, IP2, IP3, and IP9. Storage device 320 has partitions 320A, 320B, and 320C, addressed using IP addresses IP4, IP5, IP6, and IP9. Write requests to IP3 or IP9 will result in partitions 310C, 320C, and 330C storing the same data. Read requests to the IP9 address will result in 310C, 320C, and 330C responding with the same information, with the requester presumably using whichever data arrives first. In the multicast form it may be preferred that devices 310, 320, and 330 listen for the first data returned by any member of the mirrored set, and then remove that request from their request queues if another device completes the request first.
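The first-response-wins read path and request-queue cleanup described above can be sketched as follows. The class names, latency model, and queue handling are illustrative assumptions, not the disclosed protocol.

```python
# Sketch of a multicast-mirroring read: each mirror queues the request, the
# fastest mirror answers, and the others drop the request from their queues
# once any member of the mirrored set has completed it.

class Mirror:
    def __init__(self, name: str, latency: int):
        self.name, self.latency = name, latency
        self.queue = []                      # pending request ids

    def enqueue(self, request_id: str):
        self.queue.append(request_id)

mirrors = [Mirror("310C", 5), Mirror("320C", 2), Mirror("330C", 9)]

def multicast_read(request_id: str) -> str:
    for m in mirrors:                        # multicast: all mirrors see it
        m.enqueue(request_id)
    winner = min(mirrors, key=lambda m: m.latency)   # first data returned
    for m in mirrors:                        # others remove the completed request
        m.queue.remove(request_id)
    return winner.name

served_by = multicast_read("req-1")
```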


Communications


In preferred embodiments, communications between a storage element and a non-storage element will utilize a datagram protocol in which data blocks are atomically mapped to a target device. A datagram sent between elements will preferably comprise command (CMD), logical block address (LBA), data, and token fields, and no more than X additional bytes where X is one of 1, 2, 7, 10, 17, and 30. The data field of such a datagram is preferably sized to be the same as the block size (if applicable) of the element to which the datagram is addressed. As such, an element sending a quantity of data to a storage element where the quantity of data is larger than the block size of the storage element will typically divide the quantity of data into blocks having the same size as the blocks of the storage element, assign LBAs to the blocks, and send each block and LBA pair to the storage element in a datagram.
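The block-splitting behavior described above can be sketched as follows. The 512-byte block size, the padding of a final partial block, and the token handling are illustrative assumptions.

```python
# Sketch of dividing a payload into block-sized datagrams, each carrying the
# CMD, LBA, data, and token fields named in the text.  Sizes are illustrative.

BLOCK = 512

def to_datagrams(data: bytes, start_lba: int, token: int = 0) -> list:
    """Divide data into block-size pieces and pair each with a consecutive LBA."""
    if len(data) % BLOCK:                    # pad the final partial block
        data += bytes(BLOCK - len(data) % BLOCK)
    return [
        {"CMD": "WRITE", "LBA": start_lba + i,
         "data": data[off:off + BLOCK], "token": token}
        for i, off in enumerate(range(0, len(data), BLOCK))
    ]

dgrams = to_datagrams(bytes(1200), start_lba=100)   # 1200 bytes -> 3 datagrams
```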


It is preferred that the datagrams be communicated between elements by encapsulating them within addressed packets such as IP packets, with the IP address of the encapsulating packet used to identify both the element to which the packet is sent and the partition within that element to which the datagram pertains.


It is preferred that datagram recipients handle datagrams on a first come, first served basis, without reordering packets, and without assembling the contents of the data fields of datagrams into a larger unit of data prior to executing a command identified in the CMD field. As an example, a storage element may receive a datagram containing a block of data, an LBA, and a write command. The storage element, without having to wait for any additional packets, utilizes the IP address of the packet enclosing the datagram to identify the partition to be used, and utilizes the LBA to identify the location within the partition at which the data in the data field is to be written.
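The first come, first served handling can be sketched as below; the dict-based storage model and field names are purely illustrative.

```python
# Sketch of stateless, first come, first served datagram handling: each
# command executes on arrival, with no reordering or reassembly.  The
# partition store (a dict keyed by the packet's destination IP) is illustrative.

partitions = {"10.0.0.7": {}, "10.0.0.8": {}}   # packet dst IP -> {LBA: block}

def on_packet(packet: dict):
    """Execute a datagram's command immediately, without awaiting other packets."""
    dgram = packet["datagram"]
    store = partitions[packet["dst_ip"]]         # IP identifies the partition
    if dgram["CMD"] == "WRITE":
        store[dgram["LBA"]] = dgram["data"]      # LBA identifies the location
    elif dgram["CMD"] == "READ":
        return store.get(dgram["LBA"])

on_packet({"dst_ip": "10.0.0.7",
           "datagram": {"CMD": "WRITE", "LBA": 7, "data": b"block-7"}})
read_back = on_packet({"dst_ip": "10.0.0.7",
                       "datagram": {"CMD": "READ", "LBA": 7}})
```

Because each datagram is complete in itself, a successful response to the command can serve as an implied acknowledgment, which is the performance point made in the following paragraph of the text.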


Handling the data in individual datagrams as they arrive rather than reassembling the data permits the use of an implied ACK for each command. Using an implied rather than an explicit ACK results in a substantial increase in performance.


Marketing of Storage Devices and Adapters


It is contemplated that once persons in the industry recognize the benefits of having storage devices having partitions that are accessed using their own IP addresses, companies will start producing and/or marketing such devices. It is also contemplated that companies will start producing and/or marketing adapters that include functionality (hardware, software, or some combination of the two) to permit traditional disk drives, flash memories, and other storage devices to operate in that manner.


Thus, methods falling within the inventive subject matter include manufacturing or selling a disk drive or other storage device in which the partitions can utilize their own IP addresses to execute packet communication with other network elements. Other inventive methods include manufacturing or selling adapters that enable prior art type storage devices to do the same. Indeed, it is contemplated that companies will recognize that such adapters are available, and will continue to manufacture or sell prior art type storage devices, knowing (or even advertising) that users can employ such adapters to enable the prior art type storage devices to be used in an infringing manner.


Thus, specific embodiments and applications of the inventive storage devices have been disclosed. It should be apparent, however, to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.

Claims
  • 1. An apparatus comprising: a network interface; one or more storage media having a plurality of partitions; and one or more storage elements communicatively coupled to the one or more storage media and the network interface and configured to receive a packet from an external network via the network interface, the packet including an internet protocol (IP) address that uniquely identifies a selected subset of the plurality of partitions distinct from a non-selected subset of the plurality of partitions, the selected subset including at least two non-consecutive partitions, the packet further including a datagram having a command and a block address; and to access a block within a partition of the selected subset based at least in part on the command, the block address, and the IP address.
  • 2. The apparatus of claim 1, wherein the one or more storage elements comprise a packet-to-block translator.
  • 3. The apparatus of claim 1, wherein the block address comprises a logical block address, and the block within the partition is a physical block that corresponds to the logical block address.
  • 4. The apparatus of claim 3, wherein the partition is a first partition, the physical block is a first physical block, and a second partition of the selected subset includes a second physical block that corresponds to the logical block address.
  • 5. The apparatus of claim 1, wherein the selected subset comprises a plurality of physical blocks that respectively correspond to a plurality of consecutive logical block addresses.
  • 6. The apparatus of claim 1 wherein a storage element of the one or more storage elements is configured to execute the command to store data within the partition independent of any other received packets.
  • 7. The apparatus of claim 1, wherein the one or more storage elements are configured to receive the packet from a network peer.
  • 8. The apparatus of claim 1, wherein the one or more storage elements are configured to implement a protocol compatible with a user datagram protocol communication and a transmission control protocol communication.
  • 9. The apparatus of claim 1, wherein the packet comprises a user datagram protocol packet.
  • 10. The apparatus of claim 1, wherein a first partition of the selected subset is on a first storage device of the one or more storage media, and a second partition of the selected subset is on a second storage device of the one or more storage media.
  • 11. The apparatus of claim 10, wherein the first storage device is a first type of storage media and the second storage device is a second type of storage media.
  • 12. The apparatus of claim 1, wherein the IP address is a first IP address and a first partition of the selected subset is associated with a second IP address that uniquely identifies the first partition.
  • 13. The apparatus of claim 1, wherein the one or more storage elements are configured to implement a multicast span.
  • 14. A method comprising: receiving, by one or more storage elements, a packet via a network connection, the packet including an internet protocol (IP) address that uniquely identifies a selected subset of a plurality of partitions, of one or more storage media, distinct from a non-selected subset of the plurality of partitions, the selected subset including at least two non-consecutive partitions, the packet further including a datagram having a command and a block address; and accessing, by a storage element of the one or more storage elements, a block within a partition of the selected subset based on the command, the block address, and the IP address.
  • 15. The method of claim 14, wherein the partition is a first partition, the physical block is a first physical block, and a second partition of the selected subset includes a second physical block that corresponds to the logical block address and the method further comprises: accessing the second physical block based on the packet.
  • 16. The method of claim 14, further comprising: accessing a block within each partition of the selected subset based on the command, the block address, and the IP address.
  • 17. The method of claim 14, further comprising: accessing, by a plurality of storage elements, a respective plurality of partitions based on the command, the block address, and the IP address.
  • 18. A method comprising: receiving, via a network interface, a partition request from a network entity; partitioning, based on the partition request, one or more storage media into a plurality of partitions; assigning an internet protocol (IP) address to a selected subset of the plurality of partitions, wherein the selected subset includes at least two non-consecutive partitions and the IP address uniquely identifies the selected subset distinct from a non-selected subset of the plurality of partitions; transmitting, via the network interface to the network entity, the IP address; receiving, via the network interface, a packet including the IP address and a block address; determining that the block address uniquely corresponds to a first partition of the selected subset; and accessing the first partition based on said determining.
  • 19. The method of claim 18, further comprising: implementing a multicast span based on receipt of the packet.
  • 20. The method of claim 19, wherein the implementing of the multicast span comprises: issuing duplicative commands to the selected subset of the plurality of partitions.
RELATED APPLICATION

The present application is a continuation of U.S. patent application Ser. No. 10/473,509, entitled “DATA STORAGE DEVICES HAVING IP CAPABLE PARTITIONS,” issued on Aug. 23, 2011 as U.S. Pat. No. 8,005,918, which is a national stage entry of PCT/US02/40199 filed Dec. 16, 2002 and claims priority to U.S. Provisional Application No. 60/425,867, entitled “DATA COMMUNICATION AND STORAGE METHODS AND DEVICES,” filed on Nov. 12, 2002. The specifications of these applications are fully incorporated by reference.

US Referenced Citations (188)
Number Name Date Kind
4422171 Wortley Dec 1983 A
4617657 Drynan Oct 1986 A
4890227 Watanabe Dec 1989 A
5129088 Auslander Jul 1992 A
5193171 Shinmura Mar 1993 A
5444709 Riddle Aug 1995 A
5546541 Drew Aug 1996 A
5590124 Robins Dec 1996 A
5590276 Andrews Dec 1996 A
5634111 Oeda May 1997 A
5742604 Edsall Apr 1998 A
5758050 Brady May 1998 A
5771354 Crawford Jun 1998 A
5850449 McManis Dec 1998 A
5867686 Conner Feb 1999 A
5884038 Kapoor Mar 1999 A
5889935 Ofek Mar 1999 A
5930786 Carino, Jr. Jul 1999 A
5937169 Connery Aug 1999 A
5948062 Tzelnic Sep 1999 A
5949977 Hernandez Sep 1999 A
5983024 Fye Nov 1999 A
5991891 Hahn Nov 1999 A
6018779 Blumenau Jan 2000 A
6081879 Arnott Jun 2000 A
6101559 Schultz Aug 2000 A
6105122 Muller Aug 2000 A
6128664 Yanagidate Oct 2000 A
6157935 Tran Dec 2000 A
6157955 Narad Dec 2000 A
6202060 Tran Mar 2001 B1
6246683 Connery Jun 2001 B1
6253273 Blumenau Jun 2001 B1
6259448 McNally Jul 2001 B1
6275898 Dekoning Aug 2001 B1
6288716 Humpleman Sep 2001 B1
6295584 DeSota Sep 2001 B1
6330236 Ofek Dec 2001 B1
6330615 Gioquindo Dec 2001 B1
6330616 Gioquindo Dec 2001 B1
6389448 Primak May 2002 B1
6401183 Rafizadeh Jun 2002 B1
6434683 West Aug 2002 B1
6449607 Tomita Sep 2002 B1
6466571 Dynarski Oct 2002 B1
6470342 Gondi Oct 2002 B1
6473774 Cellis Oct 2002 B1
6480934 Hino Nov 2002 B1
6487555 Bharat Nov 2002 B1
6535925 Svanbro Mar 2003 B1
6567863 Lafuite May 2003 B1
6597680 Lindskog et al. Jul 2003 B1
6601101 Lee Jul 2003 B1
6601135 McBrearty Jul 2003 B1
6618743 Bennett Sep 2003 B1
6629162 Arndt Sep 2003 B1
6629264 Sicola Sep 2003 B1
6636958 Abboud Oct 2003 B2
6678241 Gai Jan 2004 B1
6683883 Czeiger Jan 2004 B1
6693912 Wang Feb 2004 B1
6701432 Deng Mar 2004 B1
6711164 Lee Mar 2004 B1
6732171 Hayden May 2004 B2
6732230 Johnson May 2004 B1
6741554 D'Amico May 2004 B2
6742034 Schubert May 2004 B1
6754662 Li Jun 2004 B1
6757845 Bruce Jun 2004 B2
6772161 Mahalingam Aug 2004 B2
6775672 Mahalingam Aug 2004 B2
6775673 Mahalingam Aug 2004 B2
6795534 Noguchi Sep 2004 B2
6799244 Tanaka Sep 2004 B2
6799255 Blumenau Sep 2004 B1
6834326 Wang Dec 2004 B1
6853382 Van Dyke Feb 2005 B1
6854021 Schmidt Feb 2005 B1
6862606 Major Mar 2005 B1
6876657 Brewer Apr 2005 B1
6882637 Le Apr 2005 B1
6895461 Thompson May 2005 B1
6895511 Borsato May 2005 B1
6901497 Tashiro May 2005 B2
6904470 Ofer Jun 2005 B1
6907473 Schmidt Jun 2005 B2
6912622 Miller Jun 2005 B2
6917616 Normand Jul 2005 B1
6922688 Frey, Jr. Jul 2005 B1
6934799 Acharya Aug 2005 B2
6941555 Jacobs Sep 2005 B2
6947430 Bilic Sep 2005 B2
6977927 Bates Dec 2005 B1
6983326 Vigue Jan 2006 B1
6985956 Luke Jan 2006 B2
7051087 Bahl May 2006 B1
7065579 Traversat Jun 2006 B2
7069295 Sutherland et al. Jun 2006 B2
7072823 Athanas Jul 2006 B2
7072986 Kitamura Jul 2006 B2
7073090 Yanai Jul 2006 B2
7111303 Macchiano Sep 2006 B2
7145866 Ting Dec 2006 B1
7149769 Lubbers Dec 2006 B2
7152069 Santry Dec 2006 B1
7181521 Knauerhase Feb 2007 B2
7184424 Frank Feb 2007 B2
7188194 Kuik Mar 2007 B1
7200641 Throop Apr 2007 B1
7203730 Meyer Apr 2007 B1
7225243 Wilson May 2007 B1
7237036 Boucher Jun 2007 B2
7243144 Miyake Jul 2007 B2
7254620 Iwamura Aug 2007 B2
7263108 Kizhepat Aug 2007 B2
7278142 Bandhole Oct 2007 B2
7296050 Vicard Nov 2007 B2
7404000 Lolayekar Jul 2008 B2
7421736 Mukherjee Sep 2008 B2
7428584 Yamamoto Sep 2008 B2
7475124 Jiang Jan 2009 B2
7526577 Pinkerton Apr 2009 B2
7535913 Minami May 2009 B2
7536525 Chandrasekaran May 2009 B2
7558264 Lolayekar Jul 2009 B1
7707304 Lolayekar Apr 2010 B1
20010020273 Murakawa Sep 2001 A1
20010026550 Kobayashi Oct 2001 A1
20010034758 Kikinis Oct 2001 A1
20010049739 Wakayama Dec 2001 A1
20020026558 Reuter Feb 2002 A1
20020029256 Zintel Mar 2002 A1
20020029286 Gioquindo Mar 2002 A1
20020031086 Welin Mar 2002 A1
20020039196 Chiarabini Apr 2002 A1
20020062387 Yatziv May 2002 A1
20020065875 Bracewell May 2002 A1
20020087811 Khare Jul 2002 A1
20020091830 Muramatsu Jul 2002 A1
20020126658 Yamashita Sep 2002 A1
20020133539 Monday Sep 2002 A1
20030018784 Lette Jan 2003 A1
20030023811 Kim Jan 2003 A1
20030065733 Pecone Apr 2003 A1
20030093567 Lolayekar May 2003 A1
20030118053 Edsall Jun 2003 A1
20030130986 Tamer Jul 2003 A1
20030154305 Bethmangalkar et al. Aug 2003 A1
20030172157 Wright Sep 2003 A1
20030182349 Leong Sep 2003 A1
20030202510 Witkowski Oct 2003 A1
20040078465 Coates Apr 2004 A1
20040088293 Daggett May 2004 A1
20040160975 Frank Aug 2004 A1
20040170175 Frank Sep 2004 A1
20040181476 Smith Sep 2004 A1
20040184455 Lin Sep 2004 A1
20040213226 Frank Oct 2004 A1
20040215688 Frank Oct 2004 A1
20050138003 Glover Jun 2005 A1
20050144199 Hayden Jun 2005 A2
20050165883 Lynch Jul 2005 A1
20050166022 Watanabe Jul 2005 A1
20050175005 Brown Aug 2005 A1
20050198371 Smith Sep 2005 A1
20050246401 Edwards Nov 2005 A1
20050267929 Kitamura Dec 2005 A1
20050270856 Earhart Dec 2005 A1
20050286517 Babbar Dec 2005 A1
20060029068 Frank Feb 2006 A1
20060029069 Frank Feb 2006 A1
20060029070 Frank Feb 2006 A1
20060036602 Unangst Feb 2006 A1
20060101130 Adams May 2006 A1
20060126666 Frank Jun 2006 A1
20060168345 Siles Jul 2006 A1
20060176903 Coulier Aug 2006 A1
20060182107 Frank Aug 2006 A1
20060206662 Ludwig Sep 2006 A1
20060253543 Frank Nov 2006 A1
20060272015 Frank Nov 2006 A1
20070043771 Ludwig Feb 2007 A1
20070083662 Adams Apr 2007 A1
20070101023 Chhabra May 2007 A1
20070110047 Kim May 2007 A1
20070168396 Adams Jul 2007 A1
20070230476 Ding Oct 2007 A1
20070237157 Frank Oct 2007 A1
Foreign Referenced Citations (18)
Number Date Country
0485110 May 1992 EP
0654736 May 1995 EP
0700231 Mar 1996 EP
0706113 Apr 1996 EP
61033054 Feb 1986 JP
62233951 Oct 1987 JP
63090942 Apr 1988 JP
05347623 Dec 1993 JP
7325779 Dec 1995 JP
09149060 Jun 1997 JP
10-333839 Dec 1998 JP
2000267979 Sep 2000 JP
2002318725 Oct 2002 JP
2004054562 Feb 2004 JP
2004056728 Feb 2004 JP
223167 Nov 2004 TW
WO0215018 Feb 2002 WO
WO0271775 Sep 2002 WO
Non-Patent Literature Citations (43)
Entry
Anderson, et al., “Serverless Network File Systems,” In Proceedings of the 15th Symposium on Operating Systems Principles, Dec. 1995.
Beck, Micah, et al., An End-to-End Approach for Globally Scalable Network Storage, ACM SIGCOMM Computer Communication Review; vol. 32, Issue 4, Proceedings of the 2002 SIGCOMM Conference; pp. 339-346; Oct. 2002.
Bruschi, et al., “Secure multicast in wireless networks of mobile hosts: protocols and issues”, Mobile Networks and Applications, vol. 7, issue 6 (Dec. 2002), pp. 503-511.
Chavez, A Multi-Agent System for Distributed Resource Allocation, MIT Media Lab, XP-002092534, Int'l Conference on Autonomous Agents, Proceedings of the First Int'l Conference on Autonomous Agents, Marina del Rey, California, US, Year of Publication: 1997.
Cisco Systems, “Computer Networking Essentials,” Copyright 2001.
Gibson, Garth; A Cost Effective High-Bandwidth Storage Architecture; ACM SIGOPS Operating Systems Review, vol. 32, issue 5, pp. 92-103; 1998.
Gibson, Garth; File Server Scaling with Network-Attached Secure Disks; Joint Int'l Conference on Measurement & Modeling of Computer Systems Proceedings of the 1997 ACM SIGMETRICS Int'l Conference on Measurement & Modeling of Computer Systems; pp. 272-284; 1997.
IBM Technical Disclosure Bulletin, Vol. 35, No. 4a, pp. 404-405, XP000314813, Armonk, NY, USA, Sep. 1992.
Kim et al., “Internet Multicast Provisioning Issues for Hierarchical Architecture”, Dept of Computer Science, Chung-Nam National University, Daejeon, Korea, Ninth IEEE International Conference, pp. 401-404., IEEE, published Oct. 12, 2001.
Lee et al. “A Comparison of Two Distributed Disk Systems” Digital Systems Research Center—Research Report SRC-155, Apr. 30, 1998, XP002368118.
Lee, et al. “Petal: Distributed Virtual Disks”, 7th International Conference on Architectural Support for Programming Languages and Operation Systems. Cambridge, MA., Oct. 1-5, 1996. International Conference on Architectural Support for Programming Languages and Operation Systems (ASPLOS), New, vol. Conf. 7, pp. 84-92, XP000681711, ISBN: 0-89791-767-7, Oct. 1, 1996.
Lin, et al., “RMTP: A Reliable Multicast Transport Protocol,” Proceedings of IEEE INFOCOM '96, vol. 3, pp. 1414-1424, 1996.
Quinn, et al., “IP Multicast Applications: Challenges and Solutions,” Network Working Group, RFC 3170, Sep. 2001.
Robinson, Chad; The Guide to Virtual Services; Linux Journal, vol. 1997 Issue 35; Mar. 1997.
Satran et al. “Internet Small Computer Systems Interface (iSCSI)” IETF Standard, Internet Engineering Task Force, IETF, CH, XP015009500, ISSN: 000-0003, Apr. 2004.
Satran et al., “Internet Small Computer Systems Interface (iSCSI)” Internet Draft draft-ietf-ips-iscsi-19.txt, Nov. 3, 2002.
Virtual Web mini-HOWTO Parag Mehta; www.faqs.or/docs/Linux-mini/Virtual-Web.html; Jun. 6, 2001.
VMWare Workstation User's Manual, VMWare, Inc., p. 1-420, XP002443319; www.vmware.com/pdf/ms32_manual.pdf; p. 18-21; p. 214-216; p. 273-282; copyright 1998-2002.
WebTen User's Guide; Version 3.0; http://www.tenan.com/products/webten/WebTenUserGuide/1_Introduction.html; Jan. 2000.
WebTen User's Guide; Version 7.0; http://www.tenon.com/products/webten/WebTenUserGuide/8_VirtualHosts.html, Chapter 8; Mar. 2008.
Office Action re U.S. Appl. No. 10/473,509 dated Jul. 1, 2010.
Final Office Action re U.S. Appl. No. 11/243,137 dated Mar. 17, 2010.
Final Office Action re U.S. Appl. No. 11/243,143 dated Feb. 19, 2010.
Notice of Allowance for U.S. Appl. No. 11/243,143 mailed Sep. 7, 2010.
Final Office Action re U.S. Appl. No. 11/479,711 dated Apr. 1, 2009.
Notice of Allowance for U.S. Appl. No. 11/479,711 mailed Jun. 22, 2010.
Supplemental Notice of Allowance for U.S. Appl. No. 11/479,711 mailed Dec. 28, 2010.
International Search Report for PCT/US2002/040199 mailed May 8, 2003.
Written Opinion for PCT/US2002/040199 mailed Jun. 21, 2004.
International Preliminary Examination Report for PCT/US2002/040199 mailed Sep. 24, 2004.
Chinese Office action for 02829871.3 mailed Sep. 8, 2006.
Chinese Office action for 02829871.3 mailed Feb. 15, 2008.
Chinese Office action for 02829871.3 mailed Aug. 1, 2008.
Chinese Notice of Grant for 02829871.3 mailed Jun. 26, 2009.
European Search Report for 02797354.4 mailed Jul. 31, 2007.
European Office action for 02797354.4 mailed Nov. 9, 2007.
European Office action for 02797354.4 mailed May 23, 2008.
European Office action for 02797354.4 mailed Aug. 16, 2010.
Indian Office action for 1600/DELNP/05 mailed Jan. 9, 2006.
Indian Office action for 1600/DELNP/05 mailed Oct. 3, 2006.
Indian Office action for 1600/DELNP/05 mailed Jan. 5, 2007.
Japanese Office action for 2004-551382 mailed Nov. 1, 2005.
Japanese Final Office action for 2004-551382 mailed Mar. 28, 2006.
Related Publications (1)
Number Date Country
20110283084 A1 Nov 2011 US
Provisional Applications (1)
Number Date Country
60425867 Nov 2002 US
Continuations (1)
Number Date Country
Parent 10473509 US
Child 13193544 US