The disclosure generally relates to distributed storage systems for distributing data, such as streaming video data.
Background information relating to the subject matter disclosed herein is summarized below.
A shared-storage system for use in a complex of video servers is based on the concept of cached block maps. The system may enable multiple video servers to cooperatively stream assets from a common pool of storage in a server complex while reducing the overhead and complexity of distributed file systems. The system allows a common pool of assets to appear as local files on the video servers and transparently redirects streaming read requests to the storage devices through a storage area network (SAN), such as a Fibre Channel SAN.
In addition, a highly scalable complex of video servers may be based on the concept of tiered caching. The system may enable a multitude of video servers to cooperatively stream assets from a common pool of storage in a distributed complex. The system may allow storage bandwidth, storage capacity, and streaming bandwidth to be scaled independently to match end-user requirements.
According to one embodiment, a distributed storage system for streaming data may include a controller, first and second computers, first and second switches, and a first storage device. The first computer may include a local file system and may use the local file system to store asset files on the first storage device. In addition, the first computer may employ a process to create a block map that includes information concerning the boundaries of the regions where an asset file is stored on the first storage device. A block map may be created for each asset file.
According to another embodiment, a tiered caching system may be employed for streaming digital assets. An exemplary implementation of such a system includes a third tier cache memory that stores the asset, and a plurality of video pumps coupled to the third tier cache. Each video pump may include a second tier cache memory that receives a copy of the asset from the third tier cache memory and emits one or more streams, and a first tier cache memory that receives a copy of the asset from the second tier cache memory and emits a plurality of streams. The system may also include a resource controller that chooses a video pump from the plurality of video pumps to stream the asset.
The program or process 12A may store a copy of the block map on a second storage device 14A coupled to the second computer 14. As shown, the second computer 14 may also be coupled to the first storage device 20.
In addition, the system may include a virtual file system 14B that enables the second computer 14 to access the asset files on the first storage device 20 using the copies of the block maps stored in storage device 14A.
A block placement algorithm may be employed by the local file system of the first computer 12 to write multiple local file system blocks contiguously when the first computer stores an asset file. In addition, the switch 18 may provide concurrent, non-blocking access among the first computer 12, the second computer 14, and the storage device 20. The process 12A may create a “hint file” (discussed below) including pointers to locations in the asset file(s).
A method for reading data from a first file stored on a first storage device includes storing a block map in a second file, where the block map may include a list of logical block addresses and each logical block address in the list may identify a sector used to store data in the first file. This method also may include issuing a system call to read data from a virtual file associated with the first file, retrieving a logical block address associated with the data, and reading data from the first storage device using the associated logical block address.
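By way of illustration only, the following sketch shows how such a read might resolve a byte offset in the virtual file to a logical block address through the block map. The names, the 512-byte sector size, and the fixed file system block size are assumptions for the sketch, not details fixed by the disclosure.

    #include <stdint.h>
    #include <stddef.h>

    #define SECTOR_SIZE   512u            /* assumed device sector size */
    #define FS_BLOCK_SIZE (64u * 1024u)   /* assumed file system block  */

    /* One contiguous extent of the first file on the storage device. */
    struct block_desc {
        uint32_t lba;   /* logical block address of the first sector */
        uint16_t len;   /* extent length, in file system blocks      */
    };

    /* Return the LBA of the sector holding byte 'offset', or -1 if
     * the offset lies beyond the end of the mapped file. */
    int64_t bmap_lookup(const struct block_desc *map, size_t nmap,
                        uint64_t offset)
    {
        uint64_t pos = 0;   /* running byte position within the file */
        for (size_t i = 0; i < nmap; i++) {
            uint64_t extent = (uint64_t)map[i].len * FS_BLOCK_SIZE;
            if (offset < pos + extent)
                return (int64_t)(map[i].lba + (offset - pos) / SECTOR_SIZE);
            pos += extent;
        }
        return -1;
    }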
Another aspect of the illustrative system relates to a tiered caching system for streaming digital assets. An exemplary embodiment of such a system may include a third tier cache memory that stores an asset; this could be, e.g., the storage device 20 described herein.
Distributed Storage Architecture
A content writer 12 manages one or more file system volumes from external Fibre Channel storage arrays 20. The storage 20 may be mounted on the content writer 12 as a local file system. The system may be configured such that no other component in the complex directly mounts the file system, but the architecture enables video pumps 14 to stream from the storage 20 just as if it were mounted locally on each video pump. One or more of the file systems may include a single LUN (Logical Unit Number) that resides on a RAID storage array.
Ingest, or asset loading, may be performed by the content writers 12. The assets being ingested may be written to the content writer's local file system. The resource controller 10 may direct the distribution of assets across file systems, ensuring that assets are loaded uniformly and randomly across the storage arrays.
The number c of content writers 12 in a complex may be determined by the total ingest capacity required. Since each content writer has a fixed maximum ingest capacity, c is simply the total desired ingest capacity of the complex divided by the capacity of each content writer.
The number of video pumps 14, v, may be determined by the number of streams to be served from the complex and is simply the total number of desired streams divided by the capacity of each video pump.
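Using symbols introduced here for convenience (the disclosure states these sizing rules only in words), and with ceilings added since server counts round up:

    c = \left\lceil \frac{I_{\mathrm{total}}}{I_{\mathrm{cw}}} \right\rceil, \qquad
    v = \left\lceil \frac{S_{\mathrm{total}}}{S_{\mathrm{vp}}} \right\rceil

where I_total is the total desired ingest capacity of the complex, I_cw the fixed maximum ingest capacity of one content writer, S_total the total number of desired streams, and S_vp the streaming capacity of one video pump.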
The number of storage arrays 20, s, may be determined by (1) the maximum storage capacity requirements, (2) the unique, or non-cached, streaming requirements of the complex and/or (3) the bandwidth available from each array. Statistical techniques may be used to determine, with high probability, the maximum percentage of the load that will fall on the most heavily loaded array at any given time.
Block Map Caching
The shared storage architecture enables the video pumps 14 to stream from the content writers' local file systems. It does this using a block map (bmap) caching mechanism that enables the content writers 12 to divulge the locations of asset data blocks to the video pumps 14. The video pumps 14 may then read the data blocks directly via the Fibre Channel switch 18 and stream from them. The block map for each asset in the system is cached on a local file system on one or more video pumps for the life of the asset. A content syncher process running on each video pump ensures that the block map cache remains consistent with the state of the assets on the content writers 12. Persistently caching the block maps and hint files for assets on the video pump(s) may enable streaming to continue in the event of a content writer failure.
A new file system layer called BCFS, the Block map Cache File System, implements the block map-to-asset-data lookup transparently to applications while streaming. In addition to the assets, hint files are required for streaming. The hint files may be generated on the content writers' local storage during the ingestion process. The hint files may be propagated to the video pumps 14 with the bmaps and similarly stored on a local file system for the life of the asset. Alternatively, block maps for the hint files may be propagated to the video pumps 14, which enables the hint files to be accessed in a similar manner to the assets and requires less local storage and network bandwidth, but does introduce additional delays if the hint file data is needed during asset ingestion.
Block Map Cache File System (BCFS)
BCFS may be a thin file system layer that presents a transparent interface to user level applications, enabling them to open and stream from assets on Tier 3 storage as if they were mounted locally.
BCFS is not a file store, meaning that it does not implement any on-disk data structures. It may use an underlying UFS file system to persist all asset block maps and hint files.
BCFS may be based on the concept of a stackable vnode interface (see, E. Zadok, et al., “Extending File Systems Using Stackable Templates,” in Proc. 1999 USENIX Annual Technical Conf., June 1999). A virtual node, or vnode, may be a data structure used within the Unix kernel to represent entities such as open files or directories that appear in the file system namespace. A vnode may be independent of the physical characteristics of the underlying operating system. The vnode interface provides a uniform way for higher level kernel modules to perform operations on vnodes. The virtual file system (VFS) implements common file system code that may include the vnode interface.
The vnode interface supports a concept known as stacking, in which file system functions may be modularized by allowing one vnode interface implementation to call another. Vnode stacking allows multiple file system implementations to exist and call each other in sequence. In a stackable vnode implementation, an operation at a given level of the stack may invoke the same operation at the next lower level in the stack.
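The pass-through pattern can be pictured with a minimal sketch; the types and names below are simplified stand-ins and not the actual FreeBSD vnode interface.

    /* Minimal stacking sketch: a BCFS vnode wraps the vnode of the
     * underlying local file system, and any operation BCFS does not
     * reinterpret is simply forwarded to the next lower layer. */
    struct uio;                        /* describes an I/O request */
    struct vnode {
        const struct vnode_ops *ops;   /* this layer's operations  */
        struct vnode *lower;           /* next lower vnode, or NULL */
    };
    struct vnode_ops {
        int (*vop_read)(struct vnode *vp, struct uio *io);
    };

    /* BCFS read for an ordinary (non-asset) file: invoke the same
     * operation on the lower vnode, e.g. the UFS implementation.  */
    static int bcfs_read_passthrough(struct vnode *vp, struct uio *io)
    {
        return vp->lower->ops->vop_read(vp->lower, io);
    }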
Some considerations in designing BCFS are:
The FFS file system may be implemented such that it is not a shared file system: for example, it may assume that there is a single server reading or writing mounted storage. Based on this assumption, it may be able to cache inode and indirect block metadata in its buffer cache. There are no mechanisms for synchronizing this cache with other servers when file system metadata changes as a result of writing or deleting asset files. If an asset's metadata is cached in server A and server B writes to the asset thereby changing the metadata, server A will not know about the new metadata. While it is conceptually possible to add buffer cache synchronization to FFS, doing so would complicate and could potentially destabilize the file system component. Propagating block maps avoids the cache coherency problem by publishing the current state of the block maps to all servers. Note that this scheme might not work well in a general purpose shared file system due to the overhead of bmap communication. However, in a video pump complex characterized by: (1) a single writer per asset, (2) assets being written once and streamed many times, and (3) a large asset block size resulting in a compact bmap representation, this approach is quite efficient.
The VFS stack may be established by mounting the BCFS layer on an existing lower layer. For example, mount -t bcfs /localfs /assets/cache mounts a block map cache file system over the local file system /localfs at the mount point “/assets/cache.” All accesses to files in /assets/cache will now pass through the BCFS module to the underlying local file system, which contains copies of the block map and hint files. The block map files act as proxies for the remote files on Tier 3 storage. The content syncher process arranges for the local block map files to have the same names as the actual asset files on Tier 3 storage (optionally with a “.bmap” extension appended).
As an alternative, a VFS implementation could present a listing of the assets on the remote storage rather than of the local files. This could be an important feature for a general-purpose shared file system, but BCFS was designed specifically to address the problem of providing high-performance shared access to assets with minimal overhead. Providing distributed directory services could require the bmap files (or a related structure) to contain sufficient information to access the metadata describing directory entries and inodes on the shared storage. Propagating this information to the video pumps 14 would require provisions for maintaining cache coherency and locking, which would add overhead. More importantly, these metadata accesses would contend with streaming accesses for service by the Storage Array 20 and could therefore degrade streaming performance. A burst of reads generated by a simple ls command could cause disruption in stream delivery. As designed, the only requests made to the Storage Array 20 by a video pump during streaming are for the stream data, not for any metadata.
Block Map Definition
This section describes what may be implied by the term Block Map, or bmap, in an embodiment.
A block may be completely specified by its logical block address (LBA) and length. A block map, then, is a sequence of blocks, where an LBA and a length identify each block.
In order to achieve high levels of throughput from disk drives, blocks should be large, to amortize the cost of a seek (moving the disk head to the start of the block) plus latency (waiting for the platter to rotate until the data is under the disk head) over a large data transfer. For the current generation of Fibre Channel disk drives, block sizes in the range of 512 KB to 2 MB may provide 50%-80% of the drive's sustainable throughput on contiguous data.
Since a single 32-bit LBA and 16-bit length are sufficient to describe a block on a Fibre Channel device, each block descriptor occupies six bytes, and the ratio of an asset's size to its block map's size is at least (block size/6):1. To the extent that blocks are placed contiguously by the file system, the block map's size is further reduced.
Based on a 1 MB contiguous block alignment, the ratio between asset size and block map size will be about 167,000:1, and the block map is typically smaller still because FFS places blocks contiguously where possible. A block map for a 1 GB asset, for example, would be at most about 6 KB.
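Checking those figures explicitly (a 1 MB block is taken as 10^6 bytes for the ratio and 2^20 bytes for the 1 GB example, matching the round numbers above):

    \frac{\mathrm{asset\ size}}{\mathrm{bmap\ size}} \ge \frac{\mathrm{block\ size}}{6\ \mathrm{B}}
        = \frac{10^{6}}{6} \approx 167{,}000:1

    \mathrm{bmap}(1\ \mathrm{GB}) \le \frac{2^{30}\ \mathrm{B}}{2^{20}\ \mathrm{B/block}} \times 6\ \mathrm{B/block}
        = 6144\ \mathrm{B} \approx 6\ \mathrm{KB}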
The file system block size is not determined by the Storage Array; rather, it may be determined by the file system running on the video pump. The Storage Array operates in terms of logical block addresses (LBAs), where each block referenced by an LBA is typically 512 bytes (the device's “sector size”). The file system addresses blocks of data in units of the file system block size. The maximum file system block size for FFS is 64 KB. Since this was not large enough to achieve throughput targets, the file system's block placement algorithm may be modified to guarantee that file system blocks are placed contiguously in multiples of the asset block size. A single asset block size may be used for all assets in a given file system, to avoid issues with fragmentation.
Block Map File Format
This section describes a sample format for a bmap file. The bmap file may include a header followed by the list of block descriptors.
Header
The bmap file header may include a version number and the SCSI Target and LUN ID of the storage device.
The “disk slice” is a FreeBSD concept similar to a partition that allows a drive to be divided into separate regions or “slices”. Slices enable a server to boot multiple operating systems. Each OS resides in a separate slice. Within a slice, multiple partitions may be defined, where each partition corresponds to a logical disk (e.g., “a:”, “b:”, “c:”, . . . ). When formatting Tier 3 storage, one may use a specific defined slice and partition, but put the slice and partition in the bmap header to avoid hard-coding assumptions and to allow for flexibility in Tier 3 storage configurations.
The LUN ID may be included in the bmap header because it may be needed by BCFS to identify which Storage Array RAID device to read from. Within a Storage Array RAID device, the blocks that are striped across the physical drives may be collectively presented as one (or more) LUNs to the video pump. Since a video pump may use multiple LUNs, it may be beneficial for the LUN ID in the bmap header to identify which RAID device to read from.
Block Descriptor
A block descriptor defines the location and size of a single contiguous block of data on the disk. The Logical Block Address (LBA) defines the block's location, and the length, in multiples of the fs-block size, defines its extent.
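Gathering the fields described above, a bmap file along these lines might be declared as follows. The 32-bit LBA and 16-bit length come from the disclosure; the remaining field names, widths, and the explicit descriptor count are illustrative assumptions.

    #include <stdint.h>

    /* Hypothetical on-disk layout of a bmap file: a header followed
     * by a list of block descriptors. Packing and endianness details
     * are omitted from this sketch. */
    struct bmap_header {
        uint32_t version;      /* bmap file format version           */
        uint32_t scsi_target;  /* SCSI Target of the storage device  */
        uint32_t lun;          /* LUN ID: which RAID device to read  */
        uint32_t slice;        /* FreeBSD disk slice on that device  */
        uint32_t partition;    /* partition within the slice         */
        uint32_t ndescs;       /* descriptor count (assumed field; a
                                  real format could derive it from
                                  the file size instead)             */
    };

    struct bmap_block_desc {
        uint32_t lba;          /* logical block address (512 B sectors) */
        uint16_t length;       /* contiguous length, in fs-block units  */
    };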
Asset Caching with BCFS
The bmap file may be generated on the content writer 12 at the time an asset is loaded. A content syncher on the video pump 14 ensures that the bmap files in /assets/cache are up to date with respect to the files on the content writer. When a stream is assigned to a video pump, the streaming server process may then open the asset and read from the file as if the asset were stored locally.
When the streaming application makes a VOP_READ request for the asset's data, bcfs_read reads the block map and device information from the bmap file and determines the logical block address where the asset's data resides; bcfs_read then issues one or more read requests for the data as needed and returns.
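A rough sketch of that read path, reusing the hypothetical bmap_lookup translation shown earlier (again illustrative, and assuming sector-aligned requests, as streaming reads generally are):

    #include <stdint.h>
    #include <stddef.h>

    #define SECTOR_SIZE 512u

    struct block_desc;   /* as in the earlier sketch */
    int64_t bmap_lookup(const struct block_desc *map, size_t nmap,
                        uint64_t off);
    int     dev_read(int lun_fd, uint32_t lba, void *buf, size_t nsectors);

    /* Hypothetical core of bcfs_read: translate each sector of the
     * request to a device LBA via the cached block map, then read it
     * from the Storage Array LUN. A real implementation would
     * coalesce runs of contiguous sectors into large reads. */
    int bcfs_read_core(int lun_fd, const struct block_desc *map,
                       size_t nmap, uint64_t off, uint8_t *buf, size_t len)
    {
        while (len >= SECTOR_SIZE) {
            int64_t lba = bmap_lookup(map, nmap, off);
            if (lba < 0)
                return -1;                   /* read past end of asset */
            if (dev_read(lun_fd, (uint32_t)lba, buf, 1) != 0)
                return -1;                   /* device error */
            buf += SECTOR_SIZE;
            off += SECTOR_SIZE;
            len -= SECTOR_SIZE;
        }
        return 0;
    }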
After an asset is removed from a content writer 12, the content syncher on a video pump 14 removes its bmap file in /assets/cache/assetname. At this time, the content syncher also clears the asset from the Tier 2 local disk cache.
Hint File Caching with BCFS
Hint files may contain asset metadata for performing “trick modes” (e.g., fast-forward and rewind). The hint files may be generated by software when the asset is loaded onto storage. They contain pointers to particular scenes in the asset.
Hint files may be handled similarly to bmap files. The hint file may be generated on the content writer at the time an asset is loaded. The content syncher on the video pump ensures that the hint files in /assets/cache are up to date with respect to the files on the content writer. When a stream is assigned to a video pump, the Streaming Server can then open the hint file and use it, since it is locally cached.
Alternatively, if the latency requirements for streaming an asset as it is being ingested allow for it, a block map for the hint file may be cached on the video pump. Read requests for the hint file data may be handled by bcfs in the same way as asset reads, as described above.
Tiered Caching
As mentioned above, the video pumps 14 in the multi-server complex may use three tiers of content storage and caching. The purpose of tiered caching may be twofold. First, it decouples streaming bandwidth from asset storage bandwidth. With the addition of the cache, “hot spots,” or instances of load imbalance among the arrays, may be avoided. Without caching, load imbalance may easily occur due to a disproportionate number of streams being requested from any given storage array because of variations in asset popularity. Second, it reduces cost per stream by taking advantage of the natural variations in asset popularity, serving streams of differing popularities from the media types that are the most cost-effective. For example, a single highly popular asset may be most cost-effectively served from RAM, since the storage requirements are low but the bandwidth requirements are high. A large library of infrequently accessed content may be most cost-effectively served from inexpensive, slow hard drives, which have high storage capacity but low bandwidth. The tiered caching implementation dynamically and automatically serves assets from the most cost-effective type of media based on each asset's popularity at the current time.
Tier 1 of the system may include local RAM, from which a relatively small number of the most popular assets may be served. Tier 2 may include a larger local disk cache that stores the most recently streamed assets in their entirety. The Tier 2 cache may be raw block-level storage with no replication and no file system. Both Tier 1 and Tier 2 are “store and forward” type caches. Tier 3 makes up the large, long-term storage with moderate streaming capacity, which must be roughly equal to the maximum number of unique streams being requested at any time.
In addition to the Tier 2 Local Disk cache amplification effect, there is the potential that an asset read from Tier 3 storage will not be required because it was previously read and still resides in the Tier 2 cache. How often this happens may be determined by the ratio of the total Tier 2 cache size to the total Tier 3 storage size. This ratio may be termed the Disk Cache Hit Ratio (DCHR).
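In symbols (notation introduced here, not in the disclosure):

    \mathrm{DCHR} = \frac{\text{total Tier 2 cache size}}{\text{total Tier 3 storage size}}

The larger this ratio, the greater the fraction of reads that can be satisfied without touching Tier 3 storage.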
Tier 1 RAM Cache
This section outlines one possible implementation of a Tier 1 RAM caching algorithm. This algorithm is described here to illustrate how the Tier 1 cache can operate in conjunction with the other cache tiers to support a scalable server complex.
Interval caching may be a buffer management policy that identifies segments of temporally related streams that can be efficiently cached to increase server throughput. The algorithm is described in A. Dan, et al., “Buffer Management Policy for an On-Demand video server,” in IBM Research Report RC 19347, hereby incorporated by reference. The algorithm's performance with respect to reducing server cost is examined in A. Dan, et al., “Buffering and Caching in Large-scale video servers,” in Proc. Compcon, pp. 217-224, March 1995, hereby incorporated by reference. An interval may be defined as a pair of consecutive stream requests on the same asset that overlap in time. Interval caching allows the following stream in a pair to be served from cache, with an instantaneous cache requirement equal to the size of the data represented by the offset in stream start times. If the stream starts are closely spaced, the savings can be significant. The interval-caching algorithm exploits this by maintaining a sorted list of all intervals and allocating cache to the set of intervals with the smallest cache requirement.
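A toy sketch of the interval-caching allocation decision follows; the data structures and names are ours, and the cited papers define the complete policy.

    #include <stdlib.h>
    #include <stddef.h>

    /* A pair of overlapping streams on the same asset. 'bytes' is the
     * instantaneous cache requirement: the data spanning the offset
     * between the two streams' start times. */
    struct interval {
        const char *asset;
        size_t      bytes;
        int         cached;   /* set if this interval is granted cache */
    };

    static int by_requirement(const void *a, const void *b)
    {
        const struct interval *x = a, *y = b;
        return (x->bytes > y->bytes) - (x->bytes < y->bytes);
    }

    /* Grant cache to the intervals with the smallest requirement
     * first, until the RAM budget is exhausted; the following stream
     * of each cached interval is then served from memory. */
    void allocate_cache(struct interval *iv, size_t n, size_t budget)
    {
        qsort(iv, n, sizeof *iv, by_requirement);
        for (size_t i = 0; i < n; i++) {
            iv[i].cached = (iv[i].bytes <= budget);
            if (iv[i].cached)
                budget -= iv[i].bytes;
        }
    }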
When compared to static asset replication, interval caching makes more effective use of cache memory and no a priori knowledge of asset popularity or manual asset replication is required. Streams in a trick mode do not participate in Tier 1 caching due to the changing interval relationships invoked by the trick mode, but they can still benefit from Tier 1 cache hits if the requested data happens to be in Tier 1 cache.
Tier 2 Local Disk Cache
The purpose of the Tier 2 cache may be to decouple streaming bandwidth from asset storage bandwidth by caching all of the assets as they are being read from external storage 20 in their entirety. It may include a large local drive array that may be managed as raw block-level storage with no replication and no file system. It may be a “store and forward” type of cache, meaning that the cache contents are written as the blocks are read and the store operation does not impact the flow of data from external storage. In the IP2160 Media Server (Midstream Technologies), for example, the Tier 2 cache resides on a portion of the internal drive array.
The Tier 2 Local Disk Cache may be similar in structure to Tier 1 Interval Cache. When the video pump issues a read on a stream that is being cached, i.e., the stream is being requested from external storage, the block may be copied to an allocated block in the Tier 2 cache and a hash table entry may be created that maps the file's block on disk to the cache block. Before the video pump issues a read, it may check the hash table to see if the block resides on local disk cache. If so, the blocks are read from the cache block instead of from the Tier 3 disk device.
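A simplified sketch of that read path (the hash-table interface and names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    bool     cache_lookup(uint32_t src_lba, uint32_t *cache_blk);
    uint32_t cache_alloc(void);
    void     cache_insert(uint32_t src_lba, uint32_t cache_blk);
    int      tier3_read(uint32_t lba, void *buf);     /* external array   */
    int      tier2_read(uint32_t blk, void *buf);     /* local disk cache */
    int      tier2_write(uint32_t blk, const void *buf);

    /* Serve one block: hit the Tier 2 cache if the source block is
     * already mapped there; otherwise read Tier 3 and store-and-
     * forward a copy into an allocated Tier 2 cache block. */
    int read_block(uint32_t src_lba, void *buf)
    {
        uint32_t blk;
        if (cache_lookup(src_lba, &blk))
            return tier2_read(blk, buf);

        if (tier3_read(src_lba, buf) != 0)
            return -1;
        blk = cache_alloc();
        tier2_write(blk, buf);
        cache_insert(src_lba, blk);
        return 0;
    }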
Resource Controller Operation
When choosing a video pump 14 to handle a stream service request, the resource controller 10 may look for a server that is most likely to already have the asset in cache, but may direct the request to another server in the complex to make optimal use of the complex's resources. For example, suppose a single, highly popular asset is served from one video pump's Tier 1 RAM cache, and the number of viewer streams requesting this asset saturates the streaming bandwidth capacity of that video pump. In this case, the video pump will not be utilizing its Tier 3 external storage interface or its internal Tier 2 disk drives, so a higher percentage of the global stream load served from those tiers is placed on the other video pumps 14 in the complex. Since the Tier 2 local cache disk bandwidth on a video pump may be limited, the resulting increase in cached streams on the other video pumps 14 may exceed their Tier 2 disk bandwidth, effectively limiting the number of streams that the complex can support. To maximize the throughput of the complex as a whole, a balanced mix of cacheable and non-cacheable assets must be maintained on each video pump.
One relatively simple algorithm used by the resource controller 10 may effectively maintain this balance. In order to minimize communications between the resource controller and the video pumps 14 that describe the dynamic cache state of each video pump, and to avoid having to store such information, the resource controller 10 stores a table with one entry per requested asset that indicates which video pump served a viewer stream from the asset. Since caches are most effective on streams that are closely spaced in time, directing new streams to the video pump 14 that last served an asset makes effective use of the caches. Additionally, the resource controller maintains a count of the number of streams currently active on each video pump. If this count exceeds a set threshold, the stream will be directed to the video pump with the least streaming bandwidth load. This mechanism ensures that highly popular assets get shared across video pumps 14, with no single video pump handling too many streams on any given highly popular asset. It also distributes the streaming load across video pumps 14.
In summary, an illustrative implementation of the stream load balancing algorithm may include the following steps: (1) look up the requested asset in the resource controller's table to find the video pump that last served a stream from that asset; (2) if that video pump's active stream count is below the threshold, direct the new stream to it, making effective use of its caches; (3) otherwise, direct the stream to the video pump with the least streaming bandwidth load; and (4) update the asset's table entry and the chosen video pump's stream count. A sketch of this rule appears below.
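A compact sketch of that rule (the structures, the threshold value, and the pump count are illustrative):

    #define NPUMPS    8            /* video pumps in the complex (assumed)  */
    #define THRESHOLD 400          /* max active streams per pump (assumed) */

    /* One table entry per requested asset; last_pump is -1 until the
     * asset has been served at least once. */
    struct asset_entry { const char *name; int last_pump; };

    static int streams[NPUMPS];    /* active stream count per video pump */

    static int least_loaded(void)
    {
        int best = 0;
        for (int i = 1; i < NPUMPS; i++)
            if (streams[i] < streams[best])
                best = i;
        return best;
    }

    /* Prefer the pump that last served the asset (cache affinity)
     * unless it is already at the threshold; otherwise fall back to
     * the least-loaded pump, spreading popular assets across pumps. */
    int assign_stream(struct asset_entry *e)
    {
        int pump = e->last_pump;
        if (pump < 0 || streams[pump] >= THRESHOLD)
            pump = least_loaded();
        e->last_pump = pump;
        streams[pump]++;
        return pump;
    }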
The claims are not limited to the illustrative embodiments disclosed herein. For example, the foregoing disclosure of a distributed storage architecture based on block map caching and VFS stackable file system modules, as well as a scalable streaming video server complex based on tiered caching, uses explanatory terms, such as content writer, video pump, controller, and the like, which should not be construed so as to limit the scope of protection of this application, or to otherwise imply that the inventive aspects of the systems, devices and methods described herein are limited to the particular methods and apparatus disclosed. Moreover, as will be understood by those skilled in the art, many of the inventive aspects disclosed herein may be applied in computer systems that are not employed for streaming media or video-on-demand purposes. Similarly, the invention is not limited to systems employing VFS stackable file system modules and/or block maps as described above, or to systems employing specific types of computers, processors, switches, storage devices, memory, algorithms, etc. The content writers, video pumps, resource controller, etc., are essentially programmable computers that could take a variety of forms without departing from the inventive concepts disclosed herein. Given the rapidly declining cost of digital processing, networking and storage functions, it is easily possible, for example, to transfer the processing and storage for a particular function from one of the functional elements described herein to another functional element without changing the inventive operations of the system. In many cases, the place of implementation (i.e., the functional element) described herein is merely a designer's preference and not a hard requirement. Accordingly, except as they may be expressly so limited, the scope of protection is not intended to be limited to the specific embodiments described above.
This application claims the benefit of U.S. Provisional Application Nos. 60/589,578, entitled “Distributed Storage Architecture Based on Block Map Caching and VFS Stackable File System Modules,” filed on Jul. 21, 2004, and 60/590,431, entitled “A Scalable Streaming Video Server Complex Based on Tiered Caching,” filed on Jul. 23, 2004, each of which is hereby incorporated by reference in its entirety.