The present application claims priority to U.S. application Ser. No. 11/139,206 filed on May 26, 2005 and issued on Nov. 17, 2009 as U.S. Pat. No. 7,620,981, entitled “Virtual Devices and Virtual Bus Tunnels, Modules, and Methods,” the entire specification of which is hereby incorporated by reference in its entirety for all purposes.
The field of the invention is virtual device communication.
Computing systems and devices from the smallest of embedded systems to the largest of supercomputers require resources to accomplish their objectives. Possible resources include CPUs, memory, monitors, hard disk drives, networks, peripherals, and a host of other devices that combine to form a complete computing experience for the user. Over the last decade, users of computing systems and devices have become less sophisticated with respect to the complexity of the computing systems. Users are now more concerned with their computing experience than with the details of how their computing systems operate. The migration of computing systems toward less sophisticated users has pushed such systems to become more and more complex to compensate for the users' lack of computer-system expertise. The results of this migration can be seen in the continued increase in complexity and power of current operating systems, including Windows® or Linux.
Computers running Windows® or Linux present a single data portal to a user, where the computer contains nearly all aspects of the user's environment including data, applications, preferred settings, and so on. However, many users currently need multiple portals to their computing environment rather than a single system as they travel from home, to abroad, or to work. Not only do users wish to have access to their data as they move, but they also wish to take their computing experience with them. Laptop computers offer the ability to move, but at great expense. Consequently, numerous systems have been developed to enable a more decentralized user experience approach, including web portals such as Yahoo!®, where a user accesses their email interface from anywhere in the world. Users are no longer concerned about the location of their computing resources, but are rather concerned about having access to their data and having the same computing experience no matter where they are or what device they use as an interface.
A number of problems arise due to users' demand for a uniform experience. First, huge computing services, including those being developed by Yahoo!™, Microsoft®, and even Google™, require large centralized server systems to house massive amounts of users' data. Such servers are expensive to operate and maintain. Second, users' experiences tend to be limited to the capabilities these services have to offer (including email, photo organization, web browsing, searching, and so forth), rather than a complete, generalized experience. Third, these large centralized systems do not present users with their complete data set, but rather a small fraction of the data. Nor can the systems necessarily run general applications that users desire when working with their data.
In order to address these limitations and provide a cost-effective computing experience for users, a fully decentralized approach is required. Some decentralized computing efforts are well known, including, for example, the Search for Extra-Terrestrial Intelligence (SETI@Home), the Great Internet Mersenne Prime Search (GIMPS), and various file sharing programs such as BitTorrent and Kazaa. Although average users can access these applications and programs, the applications offer only a limited decentralized user experience, focusing on specific capabilities or services rather than the generalized computing needs of the user.
Several decentralized computing platforms exist today that are intended to be generalized, including Beowulf Linux clusters and advanced web services such as Microsoft .NET. Unfortunately, these and other decentralized computing platforms have several flaws that impact the user. Among other things, they require new software that the user must then purchase or license (as in the case of .NET), or write themselves, as in the case of clusters. A further problem is that the new software still requires strong centralized computing systems to support the overall computing experience. Such approaches attack the problem from an application or user point of view, which causes the solution space to be too broad. A broad point of view then tends to limit the solution to a coarse-grained approach that results in a high cost of implementation to the user.
A better approach to decentralization of computing resources attacks the problem from a finer-grained perspective, where the system scales incrementally in a more cost-effective manner rather than at a systems level where incremental costs are high. Rather than approaching the problem from a computing-system standpoint, decentralization can be approached from a fundamental computing-resource perspective, in which a computing resource distills to the physical device element level (where a device element could include, but is not limited to, a CPU, memory, a hard disk, or even a monitor). When these devices combine in a decentralized fashion, a wide range of users gain the benefits of decentralization. In addition, decentralization of devices allows devices to focus on doing what they do best and to be responsible for their own services and capabilities, thereby offloading peripheral management work from an operating system and CPU. Consequently, the operating system and CPU can focus on providing more bandwidth to an application. Decentralization also provides the benefit of scalability, such that a computing system scales up at the atomic device level as opposed to at the system level. Users scale their computing systems by purchasing devices, which are more cost effective than complete systems. In addition, older generations of decentralized devices still remain useful and contribute to the overall computing system without having to be discarded.
Decentralization, by its very nature, includes the concept of virtualization, where devices are distributed over a network but “appear” as locally connected from the perspective of other devices. With device virtualization provided over a network, the computing resources no longer have to be local to the user. This implies a user gains access to their computing experience and resources from any connected location once the user takes appropriate authentication or security measures. Such a decentralized computing system addresses the remaining problems with current architectures: users maintain access to their data, users can use multiple interfaces while maintaining their computing experience, applications do not have to be rewritten because decentralization occurs at the device level rather than the system level, and costs to the user decrease because the user scales their system by purchasing devices at the atomic unit rather than buying at the system level.
Beyond the benefits to the user, decentralization at the device level also creates opportunities for creating new capabilities or services by using decentralized devices as building blocks. Just as devices focus on doing what they do well, new products can be built that aggregate device level capabilities or services into more powerful capabilities or services while maintaining scalability at device level. The new products offer the powerful capabilities or services as a “device” to the rest of the system even though the “devices” are actually virtual in nature. For instance, hard disk drives can be aggregated into a network storage system with RAID capabilities, where the entire storage system appears as a single storage device. The network storage system focuses on storing and retrieving data, without incurring overhead associated with complete system architectures. Because the drives are decentralized and virtualized, the storage system can still be scaled at the atomic level, the hard disk drive, which is much more cost effective to the end user.
To fully realize the benefits and capabilities offered by decentralization, devices and virtual devices need to pass information to each other in an organized way, as if they exist on a virtualized bus. Therefore, there is a need for modules and methods that provide such virtualized device communication.
Such a set of modules combines with target devices to offer aggregated capabilities, and can also integrate into existing computing systems. When installed in a computing system such as Windows®, the module could take the form of a device driver, and the aggregate capabilities or services of the remote target devices appear as a single device.
A number of benefits are realized from this approach. First, computing systems employ such modules to distribute the locality of their physical device elements. This is beneficial in cases where physical security is needed or where geographic isolation offers protection against acts of God. Second, users purchase enhanced capabilities by purchasing computing system resources in discrete units rather than complete systems. If they wish to double the computing power of their system, the consumer purchases a single CPU (assuming an appropriate module is attached to the CPU) and adds it to the decentralized system. The cost for a single device is much less than the cost of a complete system. Along these lines, old components can be retained and still provide useful capabilities, even though they are not as powerful as the new components, because the old components can still function within the decentralized system. Third, the customer has access to their experience from anywhere in the world because the virtual system bus extends to any length. Once a user authenticates to their environment, the distributed system presents the user with their preferred experience no matter the actual location of the discrete resources housing the user's data. Fourth, more complex structures can be created from the discrete device building blocks. For instance, sophisticated RAID structures can be created by using hard drives and appropriately programmed modules attached to the drives. The RAID structures appear as a single drive to the user or the user's applications.
Thus, specific compositions and methods of virtual device communication, information packets transformation, and virtual bus tunnels have been disclosed. It should be apparent, however, to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the disclosure. Moreover, in interpreting the disclosure all terms should be interpreted in the broadest possible manner consistent with the context. In particular the terms “comprises” and “comprising” should be interpreted as referring to the elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps can be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.
The present invention is directed toward virtual devices and information packets used to communicate among virtual devices, where the information packets are addressed to virtual devices. Virtual devices contained within modules coordinate with other virtual devices within other modules in a peer-to-peer fashion to offer aggregated capabilities. Information packets comprise host addresses of virtual devices and comprise identifiers (IDs). IDs comprise sufficient information for routing the packets over a network to a final destination and sufficient information to determine how the information held within the packet should be conveyed to a physical target device. Modules comprising virtual devices and the ability to create information packets offer a virtual bus tunnel for virtual devices so that they can coordinate their behavior in a distributed fashion. Various embodiments of such modules offer a number of interesting features that allow devices, either local or remote with respect to the target devices, to offer greater aggregate capabilities to device users than the users would ordinarily have access to at the device level. Module capabilities include, but are not necessarily limited to, distributed data storage systems.
Virtual Devices
In our own prior art we disclosed physical device elements, (i.e. hardware devices) that don't have a unique frame address but do have a host address. Specific examples included disk partitions within a hard drive, RAM, or CPUs. We also disclosed and claimed protocols, methods and systems for using the host addresses to access the physical device elements.
As used herein, the term “virtual device” means anything (hardware, firmware, or software during its execution) that has a host address, but doesn't have a unique frame address, through which to represent itself as an operational device to an external application. A host address is any physical interface independent address, and includes, for example, IP addresses, IPv4 addresses, and IPv6 addresses. Host addresses are also considered herein to include domain names and URLs (web page addresses) because they resolve to a network address based on applications or protocols. A frame address is a physical media interface address within a packet that frames a host address. For example, in Ethernet a frame address includes a MAC address, in Frame Relay a frame address includes the DLCI or the LAN adapter address, and in ATM a frame address includes an AES address. USB usually uses frame addresses, but can also use packets that do not have frame addresses.
Network addressable virtual devices are virtual devices that are addressed at the network level of the OSI model, as opposed, for example, to the session, presentation, or application level. Thus, a mapped Windows® drive is addressed at the application level, and therefore would be a virtual device, but not a network addressable virtual device. On the other hand, assume that a physical disk drive has partitions P1-Pn. Partition P1 can be a network addressable virtual device using techniques disclosed herein, and it would be a physical device element. A logical partition mapping data block addresses to partition P1 disk locations can also be a network addressable virtual device, but it would not be a physical device element because the mapping of the data blocks to physical disk locations can be arbitrary. A logical group G1 consisting of several logical partitions P1-Pm can also be a network addressable virtual device, and a logical volume V1 consisting of several logical groups G1-Go can also be a virtual device. Of course, neither the logical group G1 nor the logical volume V1 would be physical device elements. Therefore, a collection of network addressable virtual devices can also be considered a network addressable virtual device. Consequently, within this document "virtual device" can refer to both a single virtual device and to a collection of network addressable virtual devices working together.
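Purely by way of illustration, the partition/group/volume hierarchy described above can be sketched in Python. All class and field names here are hypothetical and form no part of the disclosure; the sketch merely shows how each construct, physical or logical, carries its own host address and how collections of virtual devices compose into further virtual devices.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualDevice:
    """Anything with a host address but no unique frame address."""
    host_address: str  # e.g., an IPv4 or IPv6 address assigned to this construct

@dataclass
class PhysicalPartition(VirtualDevice):
    """Partition P1..Pn: a physical device element that is also addressable."""
    disk_id: str = ""
    start_block: int = 0
    block_count: int = 0

@dataclass
class LogicalPartition(VirtualDevice):
    """Maps data block addresses to partition disk locations;
    the mapping to physical locations can be arbitrary."""
    backing: PhysicalPartition = None
    block_map: dict = field(default_factory=dict)

@dataclass
class LogicalGroup(VirtualDevice):
    """Group G1: several logical partitions P1..Pm, itself network addressable."""
    members: List[LogicalPartition] = field(default_factory=list)

@dataclass
class LogicalVolume(VirtualDevice):
    """Volume V1: several logical groups G1..Go, also a virtual device."""
    groups: List[LogicalGroup] = field(default_factory=list)
```

Note that every level of the hierarchy is a `VirtualDevice` with its own host address, which is precisely what makes a collection of network addressable virtual devices itself a network addressable virtual device.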
In preferred embodiments, virtual devices comprise functionality such that the virtual devices can offer remote applications, users, or other devices access to aggregated capabilities of the physical target devices. Such virtual devices do not necessarily have to have a one-to-one correspondence to target devices, but these and other virtual devices do have to represent themselves as operational devices to external applications. For example, an independently addressable logical partition on a disk drive that responds properly to read and write requests based on logical block addresses would be considered a virtual device. Virtual devices also comprise any software construct or functionality including a function, application, program, service, device driver, task, process, thread, or other non-physical device elements. Those skilled in the art of firmware or software development will recognize that a virtual device is a logical construct. Consequently, numerous coding structures for virtual devices are possible including structures that represent multiple virtual devices.
Although multiple virtual devices can share a single frame address, each virtual device within a system must have its own host address to avoid conflict. This is somewhat analogous to “multi-homing” where an individual computer can have multiple IP addresses and multiple network interfaces. But the similarity is not perfect because multi-homing assigns IP addresses only to the physical network interfaces, and not to sets of functionalities as represented by applications or programs. In addition, the multiple IP addresses are usually from different sub-nets to ensure link redundancy should a route fail. Still another difference from multi-homing is the sheer number of addresses available. Because virtual devices can share frame addresses, there may be tens, hundreds, or thousands of addresses. Multi-homed systems usually have only two or three addresses.
In a particularly simple embodiment, a virtual device can merely be an electronic front end to a physical disk drive that includes for example, a processing unit (CPU) and other computer chips, an electronic network interface, an electronic disk drive interface, random access memory (RAM), a printed circuit board (PC board), a power supply, and interconnecting wires or traces. In a more sophisticated embodiment, a virtual device can be a software driver or application, which would of course have to reside on a machine readable memory, and execute on some form of physical processing unit. Non-operating software would not typically have a host address and therefore would not be considered a virtual device.
In still another aspect it is contemplated that one could write software that would execute on a computer or other hardware as a virtual device. From that perspective the inventive subject matter includes methods of writing such software, recording such software on a machine readable medium, licensing, selling, or otherwise distributing such software, installing, and operating such software on suitable hardware. Moreover, the software per se is deemed to fall within the scope of the inventive subject matter.
Information Packets
As used herein, an "information packet" is any packet containing information that passes through a virtual device. An information packet can include an ID in a split-ID format, a contiguous-ID format, or a device format.
Modules
As used herein, the term "module" means any combination of hardware, software, or firmware that provides: (a) a target device interface; (b) a network interface; and (c) a packet processor. The packet processor can include a virtual device manager, a virtual device, and a virtual bus tunnel. In instances where the module includes software, the software will, of course, reside at least temporarily on a machine-readable memory, and execute on a processing unit of an electronic device, but either or both of the machine-readable memory and the processing units can be external to the module.
Virtual Device Management
Virtual device management refers to the capability of a module to manage virtual devices within the module that appear to be real devices external to the module, where management comprises all aspects relating to controlling the existence or behavior of the virtual device constructs. It is contemplated that management comprises functionality to create, to destroy, to suspend, to resume, to modify, or to secure virtual devices, along with other related functionality.
Virtual Bus Tunnel
In order for the virtual devices to offer their capabilities or render a service to remote systems, the virtual devices need some form of communication bus. Because the virtual devices do not comprise physical interfaces, they need a mechanism for addressing other virtual devices as well as transporting data to the other devices, wherein the mechanism is referred to as a “virtual bus tunnel.” The virtual bus tunnel provides virtual devices the ability to communicate or coordinate their behaviors over extended physical distances such that a collection of devices working together do not necessarily have to be physically located near each other. Furthermore, the virtual bus tunnel allows for packet level communication among physical device elements and virtual devices addressable on the virtual bus tunnel. Communication over the virtual bus tunnel can also include direct communication from an application to virtual devices.
Normal physical buses suffer from bus contention where devices must vie for access to the bus. Virtual bus tunnels resolve contention by grouping virtual devices logically with a group ID such that multiple groups access the same physical transport media without contending for the bus by using the group ID within their packets. Each group ID represents a single logical group of virtual devices sharing a single bus with other logical groups each with their own ID. In addition, each virtual device is individually addressable via its own device ID. It is specifically contemplated that an Internet Protocol network embodies a virtual bus tunnel utilizing IP addresses as host addresses representing device IDs for virtual devices. Under such an embodiment, group IDs can be managed via IP multicasting addresses such that applications, users, or other devices that are members of the group perceive the group as a single device. Because the virtual bus tunnel can offer capabilities including quality of service management, it too can be considered a virtual device and can be addressed via a host address.
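The contention-free grouping described above can be illustrated with a short Python sketch. This is a simulation only, not the contemplated IP multicast embodiment: class and handler names are hypothetical, and a dictionary stands in for the shared transport. Every member of a logical group "sees" each packet sent to the group ID, but only the device whose device ID matches responds; the others silently discard it, so multiple groups can share one medium without vying for a bus.

```python
class VirtualBusTunnel:
    """Simulated virtual bus tunnel: logical groups of virtual devices
    share one transport, addressed by (group ID, device ID)."""

    def __init__(self):
        self.groups = {}  # group_id -> {device_id: handler}

    def join(self, group_id, device_id, handler):
        """Add a virtual device to a logical group (cf. joining a multicast group)."""
        self.groups.setdefault(group_id, {})[device_id] = handler

    def send(self, group_id, device_id, payload):
        """Multicast-style delivery: every group member sees the packet;
        members that are not addressed return None (silent discard)."""
        replies = []
        for handler in self.groups.get(group_id, {}).values():
            reply = handler(device_id, payload)
            if reply is not None:
                replies.append(reply)
        return replies


def make_device(my_id):
    """Build a handler for a hypothetical virtual device with device ID my_id."""
    def handle(dest, payload):
        if dest != my_id:
            return None  # packet not for this device: silently discard
        return (my_id, payload)
    return handle
```

In the contemplated IP embodiment, the group ID would be an IP multicast address and the device ID a host address; here both are plain strings for clarity.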
Peer-to-Peer Coordination
Virtual devices also need the ability to work together in a coordinated fashion to reach their full potential in providing aggregated capabilities or services. Peer-to-peer communication allows for devices to communicate directly with each other without using multicast where all devices within the multicast group “see” the traffic. Peer-to-peer coordination further provides for virtual devices to be aggregated into yet more powerful capabilities or services. In addition, it is contemplated that modules can be deployed into applications such that the applications have the ability to access the capabilities or services that virtual devices offer. Consequently, modules can be implemented as hardware systems attached to devices or software drivers installed on workstations running operating systems where the virtual devices within each module are programmed to work together in a peer-to-peer fashion.
The teachings herein can be advantageously employed by product developers to create systems where modules coupled with devices or components provide aggregated capabilities or services to multiple applications, users, or other devices in a distributed fashion. Devices that can be enhanced via such modules include hard disk drives, memory, monitors, patient sensors, point of sale scanners, office equipment, building automation sensors, media players, handheld systems, home entertainment systems, or computers. In addition, the modules can be integrated into software applications (if the module comprises all software components), formed into a rack-mount system containing numerous devices including hard disk drives, or attached as single modules to single devices. Modules can even be sold as a stand-alone product.
Various objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the invention, along with the accompanying drawings in which like numerals represent like components.
Virtual devices communicate with remote systems via information packets over a virtual bus tunnel. The information packets originate from a source that wishes to address an entity and direct data to the entity, where an entity on the virtual bus tunnel comprises a virtual device. The entity can appear as a single "device" from the perspective of the source.
Preferred types of information packets include packets utilizing a split-ID format, contiguous-ID format, or device format, where an ID comprises sufficient information for determining where the packets should go or how a packet should interact with a final target device.
IDs
Information packets comprise IDs that contain all necessary information for routing the packets to a correct entity, where an entity comprises a module, an object within a module, a device, or a virtual device. IDs comprise at least three sub-IDs: group IDs, device IDs, and target IDs. Group IDs identify a logical grouping of entities such that the logical group can be addressed via a single ID. Using an IP address as a group ID allows multiple entities to be addressed as if they are a single "device." Device IDs identify specific entities in a group. It is contemplated that specific entities comprise non-physical objects including virtual devices. Target IDs identify a specific entity's capability or characteristic.
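The three-part ID structure can be sketched as a small Python data type. The class name and the textual "group/device/target" encoding below are hypothetical conveniences for illustration; the disclosure does not fix any particular wire format for the sub-IDs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PacketID:
    """Sketch of an information-packet ID and its three sub-IDs."""
    group_id: str   # logical grouping of entities, e.g. an IP multicast address
    device_id: str  # a specific entity within the group (possibly a virtual device)
    target_id: str  # a specific capability or characteristic of that entity

    def encode(self) -> str:
        # Hypothetical textual encoding, used only for this illustration.
        return f"{self.group_id}/{self.device_id}/{self.target_id}"

    @staticmethod
    def decode(s: str) -> "PacketID":
        group_id, device_id, target_id = s.split("/", 2)
        return PacketID(group_id, device_id, target_id)
```

A round trip through `encode`/`decode` preserves all three sub-IDs, which is the property any concrete split-ID or contiguous-ID layout would likewise need.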
Split-ID Format
Contiguous-ID Format
Information packets using a split-ID format represent a more preferred approach for addressing and for data transport using modules; however, other approaches are also possible.
Device Format
Information packets using device formats represent data packets received from a target device or sent to a target device. Contemplated packets are raw packets with no additional formatting added to them such that a target device perceives the packets as a naturally presented packet for the target device's interface. Information packets using device formats are device specific, consequently, they can vary from module to module depending on what types of target devices attach to the modules. In some embodiments, a module can be able to handle many different types of device formats directed toward a heterogeneous mix of target devices.
Aggregated Capabilities
Module Capabilities
Packet processor 520 comprises the capability for processing the various types of information packets, including packets with split-ID, contiguous-ID, or device formats. In order to process the packets, the packet processor further comprises virtual device manager 526 and virtual bus tunnel 524, both of which represent further functionality for packet handling. Both of these elements are under the command and control of packet processor 520. Virtual device manager 526 further comprises at least one of a possible plurality of virtual devices 528A through 528N. It is contemplated that the number of virtual devices can vary as shown depending on the requirements of module 500. Virtual devices 528A through 528N each have a bi-directional device data path 537A through 537N, respectively, providing access to target devices 540A through 540M. In addition, virtual devices 528A through 528N each have a bi-directional packet data path 525A through 525N, respectively, used to access virtual bus tunnel 524. Virtual bus tunnel 524 provides the virtual devices a communication path 517 to other modules, virtual devices, or systems external to module 500 via network interface 510. Contemplated data paths within the module include Application Program Interfaces (APIs), memory copies, Inter-Process Communications (IPCs), physical connections, or other data transfer mechanisms.
Network Interface
Network interface 510 has responsibilities comprising providing network-oriented data path 505 from module 500 to remote modules and systems that desire access to the capabilities or services offered by module 500 and the objects it manages. It is contemplated that preferred embodiments of network interface 510 comprise use of an internetworking protocol to manage communication. In more preferred embodiments of network interface 510, the internetworking protocol comprises IPv4 or IPv6. In addition, because module 500 can be placed in a general purpose computing system and function as driver-level software, network interface 510 is contemplated to comprise an API that provides access to the general purpose computing system's network port. It is also contemplated that communications with entities external to module 500 can be secured with respect to confidentiality, integrity, or authentication.
Packet Processor
Packet processor 520 represents a logical group of capabilities to handle information packets and to govern some of module 500's main aspects. In preferred embodiments, packet processor 520 contains any combination of hardware, software, or firmware necessary to assign host addresses to various objects associated with module 500, to take action on the information packets passing through network interface 510 or target device interface 530, or to process information packets at rates up to the line-rate of network interface 510 or the line-rate of target device interface 530. Packet processor 520 assigns host addresses to module 500 objects including target devices 540A through 540M, virtual devices 528A through 528N, module 500 itself, virtual bus tunnel 524, logical groups of module objects, or other real or virtual objects associated with module 500. When packet processor 520 takes action on information packets traveling through module 500, information packets are converted among the various types of ID packets. The packets contain the necessary information for routing to other systems as well as the data for the other systems to properly process the packets. Therefore, specific goals or rules employed by module 500 govern the actions taken on the information packets. Preferred embodiments of packet processor 520 comprise the capability of communicating with other modules. Yet even more preferred embodiments of packet processor 520 comprise the capability of communicating with other modules, where the other modules are separated from module 500 by routers.
In order to facilitate communications with other modules, a preferred embodiment of packet processor 520 further comprises the ability to acquire a plurality of host addresses from internal module resources or from external module resources. Once acquired, the host addresses can be assigned by packet processor 520 to various module objects. To further facilitate communications, preferred host addresses comprise routable addresses such that systems external to the module determine how to route the information packets containing the host addresses to a final destination. Furthermore, a more preferred version of packet processor 520 comprises a routable node with respect to the routable addresses. Such a packet processor comprises the capability of routing packets with host addresses to internal objects as well as to external objects. In addition, host addresses can comprise addressable names used to reference objects, where contemplated names include names that can be used by DNS systems to resolve specific host addresses. Furthermore, for localized systems that do not have access to DNS, a local name resolution system is contemplated, preferably a peer-to-peer mechanism. More preferred host addresses comprise IPv4 or IPv6 addresses.
As a preferred packet processor 520 operates on information packets, it is contemplated that the results of taking action on the packets yield no out-going packets or at least one out-going packet. The case of no out-going packets provides for silently discarding packets not destined for any object within module 500. Contemplated uses for silently discarding include using IP multicasting to communicate with a logical group of objects providing an aggregate set of capabilities or services, where information packets are sent to all objects in the group, but only one target object needs to respond and all others can silently discard the information packet. The case of at least one out-going packet provides for transforming from one type of packet to another and for routing packets to their final destination. An even more preferred embodiment of packet processor 520 provides capabilities comprising processing packets uni-directionally, multi-directionally, or returning packets backward whence they came. Contemplated uses for uni-directional processing include accepting information packets destined for target devices and transforming the packets to device format. The packets are then delivered directly to a target device. Contemplated uses for multi-directional processing include sending packets to a target device as well as sending a possible acknowledgement to a remote source. In addition, contemplated uses for returning backward packets include responding with an error if information packets cannot be processed properly.
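The three processing outcomes just described (silent discard, uni-directional transform to device format, and multi-directional processing with an acknowledgement) can be sketched as a single dispatch function. This is an illustrative simplification, not the packet processor's actual implementation: packets are plain dictionaries and all key names are hypothetical.

```python
def process(packet, local_objects):
    """Sketch of packet-processor dispatch.

    Returns a list of out-going packets:
      []            - silent discard (not destined for any local object)
      [device_pkt]  - uni-directional transform to device format
      [device_pkt, ack_pkt] - multi-directional: deliver plus acknowledge
    """
    dest = packet.get("device_id")
    if dest not in local_objects:
        # Not destined for any object within this module: silently discard,
        # e.g. a multicast packet another group member will answer.
        return []

    # Uni-directional case: transform the information packet to device format
    # for direct delivery to the target device.
    out = [{"format": "device", "device_id": dest, "data": packet["data"]}]

    # Multi-directional case: also return an acknowledgement to the source.
    if packet.get("ack_requested"):
        out.append({"format": "split-id",
                    "device_id": packet["source"],
                    "data": b"ACK"})
    return out
```

An error response to an unprocessable packet (the "returning backward" case) would follow the same pattern, substituting an error payload for the acknowledgement.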
Packet processor 520 also comprises additional contemplated functionality including configuring target devices 540A through 540M, configuring target device interface 537, managing virtual bus tunnel 524, or managing virtual device manager 526. Packet processor 520 further comprises virtual bus tunnel 524 and virtual device manager 526. Virtual bus tunnel 524 provides a conduit for virtual devices 528A through 528N used for communication with other systems external to module 500. Virtual device manager 526 comprises sufficient software for controlling and managing virtual devices 528A through 528N. Both the virtual device manager and virtual bus tunnel will be discussed in more detail in following sections.
Target Device Interface
Target device interface 530 comprises a layer of functionality providing the logical blocks within module 500 access to target devices 540A through 540M. Contemplated target device interfaces include homogenous interfaces where all the target devices are of the same type or class, or heterogeneous interfaces where at least some of the target devices differ from the other target devices. Specifically contemplated homogenous interfaces include hard disk drive interfaces. Contemplated heterogeneous interfaces comprise various device level interfaces including DMA for low level devices, peripheral ports, APIs, or user interface ports.
Virtual Device Manager
Preferred versions of virtual device manager 526 comprise the capability for managing virtual devices 528A through 528N and functionality for mapping virtual devices to the capabilities or characteristics of target devices 540A through 540M. Within this context, managing implies virtual device manager 526 has complete control over the existence or behavior of the virtual devices, including the capability to create, destroy, modify, suspend, or resume operation of the virtual devices.
Virtual Devices within a Module
Virtual devices 528A through 528N can be advantageously implemented through software constructs including programs, tasks, threads, processes, or monolithic coding systems. Each virtual device does not necessarily map to a target device in a one-to-one fashion. In addition, virtual devices do not necessarily have to correspond to any capability or characteristic of any of the target devices. In preferred embodiments at least one virtual device maps its functionality to a set of target devices including rotating or non-rotating media. Specifically contemplated non-rotating media includes all possible forms of memory. Specifically contemplated rotating media includes hard disk drives, optical drives, or other yet to be conceived rotating media. In the case of rotating media comprising hard disk drives, virtual devices 528A through 528N are contemplated to be logical partitions mapped to storage areas on the hard disk drive.
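A minimal sketch, assuming each virtual device is a logical partition mapped to a contiguous storage area on a hard disk drive; the VirtualDevice class, its field names, and the block counts are hypothetical.

```python
class VirtualDevice:
    """Hypothetical logical partition mapped to a storage area."""

    def __init__(self, name, start_block, block_count):
        self.name = name
        self.start_block = start_block  # first block of the mapped area
        self.block_count = block_count  # size of the mapped area

# Two logical partitions together covering a 1000-block drive; the
# partitions need not correspond one-to-one with target devices.
partitions = [VirtualDevice("part0", 0, 400),
              VirtualDevice("part1", 400, 600)]
```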
For modules to be deployed in existing computer systems including Windows® or Linux, preferred virtual devices are most advantageously implemented as drivers, threads, or processes. For self-contained modules to be attached physically to target devices, virtual devices are advantageously implemented as tasks on embedded operating systems. In addition, it is contemplated that virtual devices can represent components other than target device capabilities, including functions which can serve as remote event handlers for a decentralized computing system. Under such a contemplated virtual device, the host address associated with virtual devices becomes an event handler ID which can take the form of an IP address, port assignment, or memory address.
Module Implementation
A preferred network interface 610 comprises the use of Ethernet to exchange packets. Because modules are contemplated to exist in a wide variety of networking environments, wired or wireless interfaces are contemplated. Preferred wireless interfaces include all forms of 802.11, Ultra Wide Band (UWB), WiMAX, HomeRF, HiperLAN, Bluetooth, IrDA, or other wireless networking technologies existing now or in the future. Contemplated wired interfaces include Ethernet supporting various speeds including 10 Mbps, 100 Mbps, 1 Gbps, 10 Gbps, or higher speeds. In addition, contemplated network interface 610 can comprise more than one physical channel for various reasons including to increase performance, to provide out-of-band management, to provide out-of-band module to module communication similar to an internal bus or backplane, or to provide logical separation of decentralized capabilities or services.
Preferred embodiments of target device interface 630 include any combination of homogenous or heterogeneous wired interfaces including serial or parallel interfaces. In addition, target device interface 630 comprises the capability of interfacing to one or more target devices. Yet more preferred embodiments of target device interface 630 include serial interfaces including RS-232, RS-485, RS-422, USB, Firewire, Ethernet, or even programmable IO pins for low level devices. Yet even more preferred serial interfaces include SATA or fiber channel interfaces. Contemplated parallel interfaces include SCSI, PATA, PCI, or PCI-like interfaces. Beyond wired interfaces, it is also contemplated that target device interface 630 can employ wireless interfaces including Bluetooth, IrDA, or even 802.11 wireless communications. In some embodiments, target device interface 630 can comprise an API to access functionality of a larger system.
When implemented on a hardware platform for use with physical target devices 640A through 640M, module 600 includes a number of other components in order to support the overall functionality of the module. Those ordinarily skilled in the art of embedded systems or software development will recognize the standard components usually employed when creating such modules. The components include an embedded real-time operating system, TCP/IP stack, file system, web server for use as a user interface, or other middleware. Security for confidentiality, integrity, or authentication can be handled through the use of protocols including IPSec, VPNs, SSL, SSH, RADIUS, Kerberos, or other security systems as they gain currency. Additional hardware components might also be necessary, including clocks, internal buses, transceivers, or others. When module 600 comprises all software for computing systems including Windows® or Linux, the module will interface to elements within the computing system via a set of APIs, where the hardware associated with the system is part of a generalized computing platform. Specifically contemplated implementations of modules include programmable gate arrays, ASICs, embedded boards that attach directly to target devices, back planes, rack-mount systems, or even software components.
Virtual Bus Tunnel
Network 720 comprises the virtual bus tunnel within modules 712, 740A, and 740B. Each virtual bus tunnel provides an addressing mechanism and a data transport mechanism. The addressing mechanism comprises a number of capabilities: the capability of addressing logical groups of target devices or virtual devices such that they can be addressed by a single point address, the capability of addressing individual target devices or virtual devices, or the ability to address specific characteristics associated with target devices or virtual devices. The virtual bus tunnel does not necessarily tunnel device data because device formats from one module to another can vary widely. Group level addressing allows multiple logically separate systems to utilize the same physical transport media as a virtual bus. For example, if a target device happens to be a CPU, multiple users access the CPU via different group IDs, where each group ID represents one user's decentralized computing system. The data transport mechanism comprises the ability to transfer and route information packets from system to system until the information packets reach their final destination. A preferred virtual bus tunnel comprises an internetworking protocol which provides addressing capabilities and data transport capabilities. Specific contemplated internetworking protocols include IPv4 or IPv6. Even more preferred virtual bus tunnels comprise logical groups of objects within modules and optionally objects from other modules addressed via multicast IP addresses.
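Group-level addressing over a virtual bus tunnel can be sketched as below, where a plain dictionary stands in for IP multicast group membership; the join and send_to_group names and the example group address are illustrative assumptions.

```python
groups = {}  # group address (single point address) -> member objects

def join(group_addr, member):
    # A module, virtual device, or target device joins a logical group.
    groups.setdefault(group_addr, []).append(member)

def send_to_group(group_addr, packet):
    # Every group member receives the packet, as with an IP multicast
    # datagram sent to the group's single point address.
    return [(member, packet) for member in groups.get(group_addr, [])]

join("239.0.0.1", "virtual_device_A")
join("239.0.0.1", "virtual_device_B")
deliveries = send_to_group("239.0.0.1", {"lba": 7})
```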
IP Multicasting—Mirrors, Stripes, and Spans
Within the example presented in
Although a preferred embodiment of multicast group 820 utilizes IP multicasting to address logical groups of virtual devices, alternative embodiments are possible. For example, hosts can employ broadcasts on a subnet to address a logical group, wherein the broadcast packets include a group ID representing the logical group. Members in the group use the group ID to determine if the packet should be accepted. Even though this document contemplates the use of IP multicasting to provide group addressing, all possible alternative group addressing schemes based on group IDs are also contemplated; specifically contemplated group addressing includes the ability to route across subnets. Therefore, “multicast” includes the concept of alternative logical group addressing.
Multicast Spans
A logical group of logical partitions forms a multicast span similar to that represented by multicast group 820. Modules, virtual devices, or target devices join a multicast group represented by a single IP address, or group ID, where virtual devices are logical partitions on target hard disk drives. The single IP address represents a group ID that identifies a single logical volume. As an example, a single logical group can form a single spanned logical volume, also called a multicast span, identified by the group ID. Data spans across the logical partitions spread across a plurality of drives such that the total available storage capacity can be extended beyond that provided by a single drive. In a multicast span, LBAs are distributed sequentially from 0 to a maximum number. Each logical partition is responsible for a range of LBAs, and each subsequent logical partition continues the LBA numbering sequence where the previous logical partition left off. Logical partitions can cover an entire drive; however, logical partitions in a span can be of any size.
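The sequential LBA numbering across a span's partitions can be sketched as follows; the build_span function and the partition sizes are hypothetical illustrations.

```python
def build_span(partition_sizes):
    """Assign each logical partition a contiguous LBA range that
    continues where the previous partition left off."""
    ranges, start = [], 0
    for size in partition_sizes:
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# Three partitions of differing sizes, numbered from LBA 0 upward.
span = build_span([100, 250, 50])
# span == [(0, 99), (100, 349), (350, 399)]
```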
When a host wishes to send or retrieve data from the multicast span, the host sends a request to the multicast group via packets with a split-ID or contiguous-ID format containing the IP address of the group as well as the LBA associated with the data. All virtual devices, or logical partitions, within the group receive the information packet via their modules. The logical partitions determine if they are responsible for that particular data block as determined by the LBA. If a logical partition is not responsible for the data block, the information packet is silently discarded. If a logical partition is responsible for the data block, the information packet format will be transformed. On a write, the packet format will be changed to device format and the data will be written to a target disk. On a read, an information packet will be read from a target disk and transformed to split-ID or contiguous-ID format then sent back to the originating host. A response information packet can contain data, an error, or an acknowledgement. The host expects one response to its request to a multicast span.
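The request flow above, in which exactly one partition responds and the rest silently discard, might look like this sketch; the span_read function and its packet representation are hypothetical.

```python
def span_read(lba, partition_ranges):
    """Simulate delivering one read request to every partition in the group."""
    responses = []
    for part_id, (lo, hi) in enumerate(partition_ranges):
        if lo <= lba <= hi:
            # The responsible partition transforms and answers the request.
            responses.append({"partition": part_id, "lba": lba})
        # All other partitions silently discard the information packet.
    return responses

# Two partitions covering LBAs 0-99 and 100-199; the host expects
# exactly one response from the group.
ranges = [(0, 99), (100, 199)]
```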
Multicast Stripes
Multicast stripes are similar to multicast spans with the exception that the logical partitions are spread across numerous drives to provide increased performance when reading data from the drives. In this sense, multicast stripes are a super-set of multicast spans where the partitions can be any size, not just the size of the hard disk drive.
Referring to
Multicast Mirrors
Multicast mirrors can be constructed in a similar fashion as multicast spans and multicast stripes with the exception that there can be multiple logical groups that duplicate data. Consequently, a mirrored logical volume, also called a multicast mirror, comprises more than one set of LBAs numbered from 0 to a maximum value. Multicast group 820 can represent a multicast mirrored volume where some logical partitions mirror data from other logical partitions. When a host sends information packets to the multicast mirror, more than one logical partition can be responsible for the data's LBA. Consequently, the host expects more than one response from the multicast mirror. Therefore, the main difference between a multicast mirror and a multicast stripe or span is that the originating host can expect more than one response to a request.
It is contemplated that multicast mirrors, stripes, or spans can all coexist with each other where each can have its own multicast IP address, allowing them to provide services to numerous remote hosts. It is also contemplated that the partitions that make up multicast mirrors, stripes, or spans communicate with partitions within other modules. Such communication allows each type of volume to work together to provide enhanced abilities including data redundancy.
RAID Structures
In especially preferred embodiments, multicast spans, stripes, or mirrors form RAID-like structures. Multicast stripes represent RAID level 0 where data is striped across numerous physical drives. Multicast mirrors represent RAID level 1 where data is mirrored from a primary source to a mirrored volume. By combining multicast mirrors, stripes, or spans along with other RAID concepts including parity, more complex RAID structures are possible, RAID-5 for example. One ordinarily skilled in the art of RAID systems is able to build such systems.
Virtual Bus Tunnel Method
Beginning at step 900, a module associates virtual devices with target devices and their capabilities or characteristics. Virtual devices can be logical constructs that can be governed by software. Once the virtual devices are instantiated, whether in software or hardware, they are ready to begin performing their tasks and to communicate with other virtual devices. For example, in a distributed storage system logical partitions are associated with physical disk partitions. Once established, the logical partitions are able to participate with other virtual devices over a virtual bus tunnel.
At step 905 a module acquires multiple host addresses. The acquisition can be performed in a number of different possible ways. One contemplated method for acquiring host addresses includes using external module mechanisms including DHCP requests to gain a number of IP addresses from a DHCP server. Additional contemplated methods for acquiring IDs include using internal module mechanisms including Auto-IP or having the host addresses pre-determined via initial configuration at manufacturing time or subsequent configuration during deployment. Host addresses combine with other ID information to eventually form split-IDs or contiguous-IDs.
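One possible acquisition order, an external mechanism first with a fall-back to pre-configured addresses, is sketched below; the acquire_addresses function and its inputs are illustrative assumptions rather than a disclosed procedure.

```python
def acquire_addresses(dhcp_offer, factory_defaults):
    """Prefer addresses obtained from an external mechanism (dhcp_offer
    stands in for addresses returned by a DHCP request); otherwise fall
    back to addresses configured at manufacturing time or deployment."""
    return list(dhcp_offer) if dhcp_offer else list(factory_defaults)
```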
At step 910 a module can assign the acquired host addresses to various objects associated with the module. Contemplated objects include real or virtual objects. Real objects include the module itself, interfaces, target devices, chassis, power supplies, or other physical devices that can be referenced by an external host. Virtual objects include virtual devices within the module, virtual bus tunnel, multicast mirrors, multicast stripes, multicast spans, logical volumes, logical groups, logical partitions, or other virtual systems providing capabilities or services to remote hosts.
At step 920 a module waits for information packets to arrive from any source including a target device interface, a network interface, virtual devices within the module, or other sources internal or external to the module. Contemplated information packets include packets with split-ID format, contiguous-ID format, or device format, where preferred embodiments use host addresses that comprise IP addresses.
At step 935 a module determines if an information packet has device format. If the packet does have device format, the module takes action on the packet at step 932 to transform the packet to use split-ID or contiguous-ID format. The transformation can be governed by a virtual device's functionality. Once the transform is complete, the packet is sent over a virtual bus tunnel at step 934. A preferred virtual bus tunnel in step 934 groups virtual devices or target devices via multicasting groups to provide a single point contact for communication. Furthermore, a preferred media comprises Ethernet based media to transport packets over a network. Once the packet has been sent at step 934, the module again receives packets at step 920.
If step 935 determines the information packet does not have device format, then at step 945 a module determines if the packet is an ID packet comprising a split-ID or contiguous-ID. If the packet is an ID packet, then step 955 determines if the packet is destined to travel to a target device. If so, the packet is transformed to a packet with device format at step 952 where all ID information and packet contents help determine the final transformation. At step 954, the final packet is sent to the target device. If at step 955 the packet is not destined for a target device, then the module determines if the packet is silently discarded or is transformed at step 957. If step 957 determines the packet must be silently discarded, as in the case of multicast mirroring, striping, or spanning, then the packet is discarded and the module again waits to receive new packets back at step 920. Otherwise, if step 957 determines the packet must not be silently discarded, the packet is further transformed to a different ID format at step 956 and sent on via step 934. A contemplated use for this type of transform includes a virtual device representing a multicast stripe that can copy data directly to a multicast mirror without having a source host perform multiple operations to create data redundancy.
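The decision steps above can be condensed into a dispatch sketch; the step numbers come from the text, while the process function and its packet fields are hypothetical.

```python
def process(packet):
    """Return (destination, out-going packet) for one information packet."""
    if packet["format"] == "device":
        # Steps 932/934: transform to an ID format and send over the tunnel.
        return ("tunnel", {**packet, "format": "split-ID"})
    if packet["format"] in ("split-ID", "contiguous-ID"):
        if packet.get("to_target"):
            # Steps 952/954: transform to device format for the target device.
            return ("target", {**packet, "format": "device"})
        if packet.get("discard"):
            # Step 957: silent discard (e.g., multicast mirroring or striping).
            return ("discarded", None)
        # Step 956: transform to a different ID format and forward.
        return ("tunnel", {**packet, "format": "contiguous-ID"})
    # Step 946: other support processing (configuration, firmware, etc.).
    return ("other", packet)
```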
If at step 945 the received packet comprises neither device format nor a split-ID or contiguous-ID, the module processes the packet at step 946 in a manner that is determined by the programming of the module. Contemplated additional processing includes handling configuration information for target devices, managing virtual devices, updating firmware, or other support processing.
One ordinarily skilled in the art of developing software or embedded systems will recognize the above outlined method and module can be implemented in many different ways in order to optimize various aspects of a module. Implementations include providing a single monolithic processing image to ensure very fast processing or providing multiple threads or tasks that replicate the method presented above for each target device or for each virtual device to simplify the implementation.
Example Use—Data Storage
Specific data storage examples have already been presented; however, a more focused look at employing modules within data storage will help clarify a possible embodiment of the presented material.
Data storage products can be advantageously developed based on modules and hard disk drives. A possible goal is to create a network storage system. Users and applications can access the data store, where data spread over multiple hard disks located remotely from the user over a network appears as a single drive.
At least two types of modules are contemplated. One type attaches directly to the set of hard disk drives and establishes a set of logical partitions all addressable via IP addresses. The logical partitions then can form a multicast group that presents the collection as a single disk volume to remote hosts. The second type is a software driver installed on a computer system. The computer's module presents its target device interface to the computer's file system. In addition, it is also able to communicate with the multicast group. Under this case, split-IDs comprise IP addresses and LBAs. The virtual bus tunnel comprises the use of UDP datagrams over an IP based Ethernet network. If the modules are constructed properly, then the system can process data at line-rates with respect to the network or the disk drive interfaces.
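A split-ID combining an IP address with an LBA could be serialized as in the sketch below; the 12-byte layout (4-byte IPv4 address followed by an 8-byte LBA) is an assumption for illustration, not a format specified here.

```python
import socket
import struct

def pack_split_id(ip, lba):
    """Pack a hypothetical split-ID: a 4-byte IPv4 address followed by
    an 8-byte logical block address, suitable as a UDP datagram payload."""
    return socket.inet_aton(ip) + struct.pack("!Q", lba)

payload = pack_split_id("10.0.0.7", 123456)
```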
Consumer oriented systems can be advantageously developed where one or two drives have their capabilities aggregated. A consumer installs a module driver on their PC that can connect to the modules for the drives. Most consumer NAS products suffer from performance problems; however, a module based version allows for performance at line-rates. For enterprise level systems, many drives can be combined within a rack-mount enclosure where all drives can be combined in numerous ways to enhance performance, reliability, or scalability. Because the module is virtualized, a user will have unprecedented control over their storage configuration.
The advantages this approach has over traditional NAS and SAN approaches are several fold. First, the module approach utilizes existing network infrastructure and does not need additional specialized controllers or equipment based on fiber channel or SCSI. Second, network storage systems can be developed on less expensive SATA drives, reducing the final costs to an end user. Third, by applying various arrangements of multicast mirrors, stripes, and spans, data redundancy and reliability can be increased over existing RAID systems due to automatic data redundancy where stripes and mirrors can communicate directly with each other. Finally, because the system utilizes a virtual bus tunnel capable of transporting data anywhere in the world, a user has access to their complete computing experience.
Example Use—Medical Environments
In order to illustrate the broad applicability of virtual devices and virtual bus tunnels an additional example is provided beyond data storage. In the medical environment, a patient can be monitored through the use of virtual bus tunnels created via modules. Under such circumstances modules can take on many forms including adapters to specific equipment, embedded modules within medical devices, or even software database adapters.
Patient monitoring equipment can include blood analyzers, sphygmomanometers, fetal monitors, or other devices attached to patients. Modules can combine to form a multicast group whose address represents a patient ID. Consequently, when any application, doctor, or nurse requires patient data the request is sent to the multicast group and only the relevant device responds. As wireless sensors are deployed in a hospital environment, the need will grow for patient-centric monitoring rather than room-centric monitoring. A module based system has the advantage of allowing all sensors to move with the patient and remain accessible rather than being accessible only in a room. Additionally, modules can take the form of database adapters implemented completely in software where the database adapter module continually updates databases as patient telemetry is received.
As shown with the above examples, the decentralization and virtualization of computing resources offers the ability for users to maintain existing experiences no matter the physical locality of those resources. Although the examples show single applications, all possible extensions to the applicability of modules are contemplated including, but not limited to, the use of modules to provide fully decentralized computing systems where CPUs, drives, memory, or other components are separated from each other. Such a virtual computing system allows a user to access their computer experience from anywhere in the world.
Thus, specific compositions and methods of virtual device communications, information packets, and virtual bus tunnels have been disclosed. It should be apparent, however, to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the disclosure. Moreover, in interpreting the disclosure all terms should be interpreted in the broadest possible manner consistent with the context. In particular the terms “comprises” and “comprising” should be interpreted as referring to the elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps can be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.
2004056728 | Feb 2004 | JP |
223167 | Nov 2004 | TW |
WO0101270 | Apr 2001 | WO |
WO0215018 | Feb 2002 | WO |
WO02071775 | Sep 2002 | WO |
WO2004025477 | Mar 2004 | WO |
Entry |
---|
Anderson, et al., “Serverless Network File Systems,” In Proceedings of the 15th Symposium on Operating Systems Principles, Dec. 1995. |
Bruschi, et al., “Secure multicast in wireless networks of mobile hosts: protocols and issues”, Mobile Networks and Applications, vol. 7, issue 6 (Dec. 2002), pp. 503-511. |
Chavez, A Multi-Agent System for Distributed Resource Allocation, MIT Media Lab, XP-002092534, Int'l Conference on Autonomous Agents, Proceedings of the First Int'l Conference on Autonomous Agents, Marina del Rey, California, US, Year of Publication: 1997. |
IBM Technical Disclosure Bulletin, Vol. 35, No. 4a, pp. 404-405, XP000314813, Armonk, NY, USA, Sep. 1992. |
International Search Report and Written Opinion re PCT/US05/01542 dated Aug. 25, 2008. |
International Search Report for Application No. PCT/US02/40205 dated May 12, 2003. |
International Search Report re PCT/US2005/018907 dated Jan. 11, 2006. |
Kim et al., “Internet Multicast Provisioning Issues for Hierarchical Architecture”, Dept of Computer Science, Chung-Nam National University, Daejeon, Korea, Ninth IEEE International Conference, pp. 401-404., IEEE, published Oct. 12, 2001. |
Lee et al. "A Comparison of Two Distributed Disk Systems" Digital Systems Research Center—Research Report SRC-155, Apr. 30, 1998, XP002368118. |
Lee, et al. "Petal: Distributed Virtual Disks", 7th International Conference on Architectural Support for Programming Languages and Operating Systems. Cambridge, MA., Oct. 1-5, 1996. International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), vol. Conf. 7, pp. 84-92, XP000681711, ISBN: 0-89791-767-7, Oct. 1, 1996. |
Lin, et al., “RMTP: A Reliable Multicast Transport Protocol,” Proceedings of IEEE INFOCOM '96, vol. 3, pp. 1414-1424, 1996. |
Office Action re U.S. Appl. No. 11/139,206 dated Dec. 9, 2008. |
Quinn et al., “IP Multicast Applications: Challenges and Solutions,” Network Working Group, RFC 3170, Sep. 2001. |
Satran et al. “Internet Small Computer Systems Interface (iSCSI)” IETF Standard, Internet Engineering Task Force, IETF, CH, XP015009500, ISSN: 000-0003, Apr. 2004. |
VMWare Workstations User's Manual, Version 3.2, VMWare, Inc., Copyright 1998-2002. |
Beck, Micah, et al., An End-to-End Approach for Globally Scalable network Storage, ACM SIGCOMM Computer Communication Review; vol. 32, Issue 4, Proceedings of the 2002 SIGCOMM Conference; pp. 339-346; Oct. 2002. |
Gibson, Garth; A Cost Effective High-Bandwidth Storage Architecture; ACM SIGOPS Operating Systems Review, vol. 32, issue 5, pp. 92-103; 1998. |
Gibson, Garth; File Server Scaling with Network-Attached Secure Disks; Joint Int'l Conference on Measurement & Modeling of Computer Systems Proceedings of the 1997 ACM SIGMETRICS Int'l Conference on Measurement & Modeling of Computer Systems; pp. 272-284; 1997. |
Robinson, Chad; The Guide to Virtual Services; Linux Journal, vol. 1997 Issue 35; Mar. 1997. |
Virtual Web mini-HOWTO; Parag Mehta; www.faqs.org/docs/Linux-mini/Virtual-Web.html; Jun. 6, 2001. |
VMWare Workstation User's Manual, VMWare, Inc., p. 1-420, XP002443319; www.vmware.com/pdf/ms32_manual.pdf; p. 18-21; p. 214-216; p. 273-282; copyright 1998-2002. |
WebTen User's Guide; Version 3.0, Jan. 2000; http://www.tenon.com/products/webten/WebTenUserGuide/1_Introduction.html; Jan. 2000. |
WebTen User's Guide; Version 7.0; http://www.tenon.com/products/webten/WebTenUserGuide/8_VirtualHosts.html, Chapter 8; Mar. 2008. |
Satran et al., “Internet Small Computer Systems Interface (iSCSI)” Internet Draft draft-ietf-ips-iscsi-19.txt, Nov. 3, 2002. |
Notice of Allowance re U.S. Appl. No. 11/139,206 dated Jul. 13, 2009. |
International Search Report/Written Opinion re PCT/US05/18907 dated Jan. 11, 2006. |
Notice of Allowance re Taiwan Application No. 94127547 dated Sep. 4, 2007. |
Number | Date | Country | |
---|---|---|---|
20100095023 A1 | Apr 2010 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11139206 | May 2005 | US |
Child | 12574622 | US |