Data transmission

Information

  • Patent Grant
  • Patent Number
    7,761,901
  • Date Filed
    Monday, March 8, 2004
  • Date Issued
    Tuesday, July 20, 2010
Abstract
Data to be transmitted over a network includes a first part (perhaps audio) which is always to be transmitted and alternative second parts (perhaps video coded at different compression rates) of which one is to be chosen for transmission, depending on the current network capacity. In order to accommodate systems where (owing perhaps to the use of a congestion control mechanism such as TCP) the capacity available is initially undetermined, one firstly begins to transmit just the first part. When the network capacity becomes known (typically by monitoring the performance of the network in transmitting the first part), one of the alternative second parts is chosen and transmission of it commences. If desired, this transmission may initially be performed preferentially to, or to the exclusion of, further transmission of the first part.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is the US national phase of international application PCT/GB2004/000974 filed 8 Mar. 2004 which designated the U.S. and claims benefit of GB 0306296.5, dated 19 Mar. 2003, the entire content of which is hereby incorporated by reference.


BACKGROUND OF THE INVENTION

1. Technical Field


This invention is concerned with transmission of data from a sending station to a receiving terminal. More particularly, it envisages transmission over a telecommunications network where the transmitted bit-rate that the network can support is initially undetermined.


2. Related Art


The situation addressed can arise, for example, when the rate fluctuates owing to the use of a congestion control mechanism. For example, the TCP/IP system uses IP (Internet Protocol) to carry packets. This is a connectionless service and simply transports transmitted packets to a destination. TCP (Transmission Control Protocol) is an overlay to this service and brings in the idea of a connection; the sending station transmits a packet and waits for an acknowledgement from the receiving terminal before transmitting another (or, in the event of no acknowledgement within a timeout period, it retransmits the packet). More importantly (in the present context) it embodies a congestion control algorithm whereby transmission begins with a small congestion window (that is, a small amount of unacknowledged data allowed in flight) which is progressively increased until packet loss occurs, whereupon the window is reduced again. After this initial "slow start" phase, the algorithm continues to increase the window (albeit more gradually), backing off whenever packet loss occurs; necessarily this involves some cyclic variation of the transmitted rate. A description of TCP is to be found in "Computer Networks", by Andrew S. Tanenbaum, third edition, 1996, pp. 521-542.
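By way of illustration only, the following minimal sketch models this behaviour: exponential "slow start" growth of the congestion window, more gradual additive increase thereafter, and a back-off whenever loss occurs. The function, its parameters and the thresholds are illustrative assumptions and do not reproduce any particular TCP implementation.

```python
# Toy model of TCP-style congestion control: exponential slow start, then
# additive increase, with a back-off on loss. Purely illustrative.
def simulate_congestion_window(loss_rounds, ssthresh=64, rounds=20):
    """Return the congestion window (in segments) at each round trip."""
    cwnd, history = 1, []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_rounds:            # loss detected this round trip
            ssthresh = max(cwnd // 2, 2)  # remember half the window...
            cwnd = ssthresh               # ...and back off to it
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: double per round trip
        else:
            cwnd += 1                     # congestion avoidance: add one
    return history

print(simulate_congestion_window(loss_rounds={8, 15}))
```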


Another common protocol is UDP (User Data Protocol). This does not have a congestion control mechanism of its own, but it has been proposed to add one to it, the so-called “TCP-Friendly Rate Protocol (TFRC) described in the Internet Engineering Task Force (IETF) document RFC3448. This aims to establish an average transmitting data rate similar to that which the TCP algorithm would have achieved, but with a smaller cyclic variation. It too exhibits the same “slow start” phase.
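For reference, TFRC sets its allowed sending rate using the TCP throughput equation of RFC 3448. The sketch below transcribes that equation, using the simplification t_RTO = 4R suggested in the RFC; it is included only to illustrate how the rate depends on loss and round-trip time, and the example figures are assumptions.

```python
from math import sqrt

def tfrc_allowed_rate(s, rtt, p, b=1):
    """TCP throughput equation of RFC 3448 (allowed rate X in bytes/second).

    s: segment size in bytes, rtt: round-trip time in seconds,
    p: loss event rate (0 < p <= 1), b: packets acknowledged per ACK.
    """
    t_rto = 4 * rtt                        # simplification suggested by the RFC
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# e.g. 1000-byte segments, 100 ms round trip, 1% loss events:
print(int(tfrc_allowed_rate(1000, 0.1, 0.01)))   # on the order of 10^5 bytes/s
```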


One drawback of this slow start system is that the transmitting station will not “know” what bit-rate the network will provide until the slow start phase is complete—which may take (depending on the round-trip time of the network) as much as several seconds. For some applications this does not matter, but in others it does: for example, when streaming video from a server which provides a choice of compression rates, the server cannot make an informed decision at the outset about which rate to choose. In the past, one method of dealing with this has been that the server starts by transmitting the lowest quality stream and switches up to a higher rate if and when it finds that the network can support it.


BRIEF SUMMARY OF THE INVENTION

It should be stressed that the invention does not require that either of the two protocols discussed above should be used; it does however start from the assumption that one is to transmit over a connection the bit-rate of which does not become apparent until after transmission has begun.


According to one aspect of the present invention there is provided a method of transmitting data over a network having initially undetermined transmission capacity, in which the data comprise a first part and at least two alternative second parts corresponding to respective different resolutions, for presentation at a receiving terminal simultaneously with the first part, comprising:


(a) transmitting at least an initial portion of the first part;


(b) receiving data indicative of the available transmission capacity;


(c) choosing among the alternative second parts, as a function of the data indicative of the available transmission capacity;


(d) transmitting the chosen second part and any remainder of the first part.


Other aspects of the invention are defined in the claims.





BRIEF DESCRIPTION OF THE DRAWINGS

Some embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:



FIG. 1 is a block diagram of a transmission system in accordance with one embodiment of the invention; and



FIG. 2 is a flowchart illustrating the operation of the server shown in FIG. 1.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

In this first example, a server 1 is to stream video and audio to a receiving terminal 2 via a network 3. Supposing that this material is pre-recorded, the audio data are contained in a file A held in a store 10, along with several versions V1, V2, V3 of the video, encoded at different compression rates.


At this point some explanations are in order, to avoid confusion. Reference will be made to the rate of audio or video material. This refers to the average rate at which bits were generated by the original encoder, which (apart from possible small differences in frequency references at the transmitting and receiving ends) is also equal to the average rate at which bits are consumed by the ultimate decoder. Even in constant bit-rate video compression, the instantaneous bit-rate varies according to picture content but is smoothed to a constant rate by the use of buffering. By “transmitting bit-rate” we mean the actual rate at which data are transmitted by the transmitting station.


For the purposes of the present example, we suppose that the audio file A has been encoded by means of some suitable compression algorithm at 4.8 kbit/s, whilst the video files V1, V2, V3 are encoded at 10, 20 and 40 kbit/s respectively, perhaps using one of the well known encoding standards such as the ITU H.261 or H.263, or one of the ISO MPEG algorithms.


The server 1 has a TCP interface 11, connected by a modem 12 to the network 3, which may for example be the internet. The TCP interface is entirely conventional and will not be described further. It has an input 111 for data, an output 112 for sending data packets to the modem 12, and a control output 113 which indicates to the remainder of the server whether it is permitted to deliver further data to the input 111. A control unit 13 serves to read audio and video data from the store 10 and to deliver them to the input 111 of the TCP interface 11. The data delivered to the input are also monitored by a unit 14 whose function will be described later. There is also a timer 15.


It has already been explained in the introduction that initially the server has no information about the available transmitting rate that the TCP interface 11 can deliver on to the network, and in consequence is unable to make an informed decision as to which of the three alternative video files V1, V2, V3 it should send. The rationale of the operation of this server, noting that it has only one audio file and hence no choice as to audio bit-rate, is that it delivers audio only to the interface input 111, until such time as the slow start phase of the TCP is complete (or at least has progressed sufficiently to enable a video rate decision to be made). The purpose of the rate monitoring unit 14 is to recognize when this point has been reached. In essence, it counts the bits delivered to the interface 11, and contains (or is connected to) a timer so that it can calculate the actual transmitting bit rate that this number of bits represents. This measurement could be made over one round-trip time but, in order to smooth out oscillations of the bit rate, one might choose to average it over a time window; the window should nevertheless be short enough that it does not significantly delay the recognition process. Typically one might use a window length corresponding to twice (or some other low multiple of) the round-trip time. Thus, the monitoring unit 14 has an output 141 which indicates the current transmitting bit-rate RT.
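A minimal sketch of such a monitoring unit is given below, assuming a sliding window of twice the round-trip time; the class and method names are illustrative assumptions, not taken from the patent.

```python
# Sketch of a rate monitor: count the bits handed to the interface and
# average them over a short sliding window to estimate RT.
import time
from collections import deque

class RateMonitor:
    def __init__(self, rtt_seconds, window_multiple=2):
        self.window = window_multiple * rtt_seconds
        self.samples = deque()              # (timestamp, bits) pairs

    def record(self, num_bits, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, num_bits))
        # drop samples that have fallen out of the averaging window
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def transmitting_bit_rate(self):
        """Current estimate of RT in bit/s, averaged over the window."""
        if len(self.samples) < 2:
            return 0.0
        span = self.samples[-1][0] - self.samples[0][0]
        total_bits = sum(bits for _, bits in self.samples)
        return total_bits / span if span > 0 else 0.0
```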


The system operation will now be described with reference to the flowchart shown in FIG. 2. Operation begins at Step 400, where a parameter RP representing the previously measured transmitting rate is initialized to a low value, and the timer 15 is reset. At Step 401 the control unit tests the interface output 113 until it is clear to send data. Once this test is passed it reads (Step 402) audio data from the file A in the store 10 and transfers this to the interface 11. The interface 11 transmits this in accordance with normal TCP.


The control unit then interrogates the output of the monitoring unit 14 and performs some tests on the value of the current transmitting bit-rate RT and also on the timer 15 (although naturally it is not to be expected that these tests will be passed on the first iteration). Thus if (Step 403) the transmitting rate exceeds the rate needed to transmit audio plus full-rate video (i.e. 44.8 kbit/s), further monitoring of the slow start phase becomes unnecessary and the process jumps to Step 408 (described below). If not, then at Step 404 RT is tested to determine whether it exceeds its previous value RP. If so, it is assumed that the slow start phase is still in progress: RP is set equal to RT in Step 405 and the process is repeated from Step 401. If however RT ≤ RP then the slow start phase is deemed to have ended: RP is set equal to RT in Step 406 and the process moves on to a second phase. On networks with high round-trip times it can take a long time for the slow-start mechanism to conclude; a test at Step 407 therefore also checks the state of the timer 15, and if this exceeds a maximum permitted waiting time the process moves on to the second phase, where the video rate decision is then made on the basis of the transmitting bit-rate known so far, even though this might not be the maximum available.
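A hedged sketch of this first phase (Steps 400 to 407) follows. The helper names (clear_to_send, read_chunk, transmitting_bit_rate) and the 10-second maximum waiting time are assumptions for illustration, and rates are taken to be reported in kbit/s as in the example above.

```python
import time

FULL_REQUIREMENT = 44.8   # kbit/s: audio (4.8) plus the highest-rate video V3 (40)
MAX_WAIT = 10.0           # seconds: maximum permitted waiting time (assumed value)

def slow_start_phase(interface, audio_file, monitor):
    """Send audio only until RT stops rising, already suffices, or time runs out;
    return the rate RT on which the video decision will be based."""
    rp = 0.0                                      # Step 400: previous rate, initially low
    start = time.monotonic()                      # Step 400: timer 15 reset
    while True:
        while not interface.clear_to_send():      # Step 401: wait for the interface
            time.sleep(0.001)
        interface.send(audio_file.read_chunk())   # Step 402: audio data only
        rt = monitor.transmitting_bit_rate()
        if rt >= FULL_REQUIREMENT:                # Step 403: already enough for V3
            return rt
        if 0.0 < rt <= rp:                        # Step 404: rate no longer rising
            return rt                             # slow start deemed over (Step 406)
        rp = max(rp, rt)                          # Step 405
        if time.monotonic() - start > MAX_WAIT:   # Step 407: give up waiting
            return rt
```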


This second phase begins with the control unit making, at Step 408, a decision as to which video rate to use. In this example, it chooses the highest rate that, with the audio, represents a total bit-rate requirement less than or equal to RT, viz.:


if RT≧44.8 choose V3


if 44.8>RT≧24.8 choose V2


if 24.8>RT≧14.8 choose V1


if RT<14.8 transmission is not possible; exit at Step 409.
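The decision rule above can be written compactly as follows (all rates in kbit/s as in the example); this is only a restatement of the thresholds already given.

```python
# Step 408: choose the highest-rate video whose rate, plus the audio, fits in RT.
AUDIO = 4.8
VIDEO_FILES = [("V3", 40.0), ("V2", 20.0), ("V1", 10.0)]   # highest rate first

def choose_video(rt):
    for name, rate in VIDEO_FILES:
        if rt >= AUDIO + rate:
            return name
    return None            # RT < 14.8: transmission not possible (Step 409)

assert choose_video(50.0) == "V3"
assert choose_video(30.0) == "V2"
assert choose_video(20.0) == "V1"
assert choose_video(10.0) is None
```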


Once this decision has been made, the control unit then proceeds at Step 410 to read video data from the relevant file to the TCP interface 11. It should be stressed that the initial part of this video data is contemporary (in terms of the original recorded material) with the audio already sent. Inherent in Step 410, but conventional and hence not shown explicitly, are flow control (analogous to Step 401), flow control feedback from the receiving terminal (so that the amount of data received does not cause buffer overflow) and the possibility of switching to a higher or lower rate video file in the event that network conditions improve or deteriorate, respectively.


One issue that should be considered, though, is the fact that, because, during the start-up phase, only audio has been sent, the audio buffer at the receiving terminal is ahead of the video buffer. This may be considered desirable (to a degree at least) in providing a greater resilience to short-term network problems for the audio than for the video, so that in the event of packet loss causing video buffer underflow and hence loss of video at the receiving terminal, the user may continue to listen to the accompanying sound. But, if desired, the video streaming Step 410 may temporarily, during an initial period of this second phase, slow down or even suspend transmission of audio data, until the contents of the audio and video buffers at the receiving terminal reach either equality (in terms of playing time) or some other specified relationship. Naturally this has the benefit of increasing the amount of video data that can be sent during this initial period.
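A trivial sketch of the test governing this optional catch-up behaviour is given below; the buffer-accounting helper is an illustrative assumption.

```python
def catching_up(audio_buffered_s, video_buffered_s):
    """True while video playing time buffered at the receiver still lags the
    audio already buffered, i.e. while audio transmission may be slowed or
    suspended during the initial period of Step 410."""
    return video_buffered_s < audio_buffered_s

# e.g. 3 s of audio buffered but only 0.5 s of video: keep favouring video
print(catching_up(audio_buffered_s=3.0, video_buffered_s=0.5))   # True
```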


Possible modifications to the arrangements shown in FIG. 1 include:


(a) The use of a UDP interface, with TFRC congestion control, as discussed in the introduction, in place of the TCP interface 11. In this case, because TFRC explicitly calculates the actual transmitting rate, it may be possible to dispense with the monitoring unit 14 and instead read the transmitting rate RT directly from the UDP/TFRC interface. Recognition of the end of slow start may still be performed as shown in Step 404 of the flowchart by comparing RT and RP; alternatively it may be possible to recognize it by observing when the packet loss reaches a specified level.


(b) The above description assumed that one would choose the highest video rate that the network would support; however the option also exists of deliberately choosing a lower rate in order to reduce or even eliminate the delay that occurs at the receiving terminal while the receiver video buffer is being filled to an acceptable level. Such measures are discussed in our international patent application no. PCT/GB 01/05246 (Publication no. WO 02/45372).


(c) The above description assumed that the video and audio data originated from stored files. However this method may be applied to the transmission of a live feed, provided that the server includes additional buffering so that the video can be held at the server during the initial period of audio-only transmission.


(d) Alternative audio rates can be accommodated provided that a criterion can be found whereby a decision between them can be made without recourse to any information about current network conditions. An example might be an internet service that can be accessed via different access routes having vastly different bandwidths, perhaps via a standard (PSTN) telephone line and a 56 kbit/s modem on the one hand and an ADSL connection at 500 kbit/s on the other. If the system has two alternative audio rates, say 4.8 kbit/s and 16 kbit/s, and one makes the reasonable assumption that the PSTN connection can never support the higher rate and the ADSL connection always can, then if the server is informed by the receiving terminal (either automatically or manually) as to which type of access line is in use it can make a decision as to which of the two audio rates to choose, based on this information alone. Once that decision has been made, the process can proceed in the manner already described.


In principle, the streaming method we have described will work with a conventional receiver. However, the benefits of the proposed method will be gained only if the receiver has the characteristic that, before beginning to play the received material, it waits until both its audio buffer and its video buffer contain sufficient data. In general, established video standards do not specify this functionality, leaving it to the discretion of the receiver designer. Of the receivers currently available, some have this characteristic whereas others, for example, may begin to decode and play audio as soon as the audio buffer is adequately full, even when no video data have arrived. We recommend that either one chooses a receiver of the former type, or one modifies the receiver control function so as to monitor the buffer contents and to initiate playing only when both buffers contain sufficient data (in accordance with the usual criteria) to support continuous playout.
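The recommended receiver behaviour amounts to a simple gate on playout; a minimal sketch is given below, with threshold values that are assumptions rather than figures from the patent.

```python
def ready_to_play(audio_buffered_s, video_buffered_s,
                  audio_needed_s=2.0, video_needed_s=2.0):
    """Start decoding and playing only when both buffers hold enough playing
    time for continuous playout (the 2-second thresholds are assumed)."""
    return audio_buffered_s >= audio_needed_s and video_buffered_s >= video_needed_s
```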


A second embodiment of the invention will now be described. This is similar to the first, except that it uses layered video coding. That is to say, instead of having several (three, in the above example) different versions of the video source only one of which is sent, one has a base layer, which can be decoded by itself to give a low-quality video output, and an enhancement layer, which is useless by itself but can be decoded together with the base layer to produce a higher quality video output; and one may have further enhancement layers, each of which is usable only in combination with the base layer and the intervening layer(s). In this example we also suppose that multiple (non-layered) audio rates are available. We recall that in the slow-start phase one has to transmit data in advance of deciding between the various alternative sources, and that the rationale of choosing to transmit the audio first was that, since there was only one audio rate, one knew that this would inevitably have to be transmitted, whatever the rate decision. In this second example with alternative audio rates this ceases to be the case, since neither audio stream qualifies as "always to be transmitted". However the video base layer does so qualify, and thus in this case one proceeds by commencing transmission of the video base layer in Step 402. Then in Step 408 one selects the video and audio rates to be used and in Step 410 commences transmission of the selected audio and the enhancement layer(s), if any, appropriate to the chosen video rate. In this instance, when transmitting enhancement layer video it would be appropriate to cease transmitting base layer video until all the enhancement layer video contemporary with the base layer video already sent has been transmitted.
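A hedged sketch of the Step 408 selection for this layered case is given below. The layer and audio rates are illustrative assumptions (the patent does not give figures for this embodiment), and the policy shown (highest audio rate that fits, then as many enhancement layers as the remaining capacity allows) is only one possible criterion.

```python
BASE_LAYER = 10.0                       # kbit/s, always transmitted (assumed rate)
ENHANCEMENT_LAYERS = [10.0, 20.0]       # extra kbit/s per successive layer (assumed)
AUDIO_RATES = [16.0, 4.8]               # kbit/s, highest first (assumed)

def choose_streams(rt):
    """Return (audio_rate, number_of_enhancement_layers) that fit within RT."""
    for audio in AUDIO_RATES:
        budget = rt - BASE_LAYER - audio
        if budget < 0:
            continue
        layers, spent = 0, 0.0
        for extra in ENHANCEMENT_LAYERS:
            if spent + extra <= budget:
                spent += extra
                layers += 1
            else:
                break
        return audio, layers
    return None       # not even the base layer plus the lowest audio rate fits

print(choose_streams(50.0))   # e.g. (16.0, 1)
```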


Of course, if only a single audio rate were used, then both audio and base layer video could be sent during the slow-start phase.


A third embodiment, similar to the second, uses video encoded using frame rate scalability in accordance with the MPEG-4 standard. An encoded MPEG sequence consists of I-frames (encoded using intra-frame coding only), P-frames (encoded using inter-frame differential coding based on the content of a preceding I- or P-frame) and B-frames (encoded using bi-directional inter-frame prediction based on neighboring I- and P-frames). A typical MPEG sequence might be IBBPBBPIBBP etc. In frame rate scalable coding one transmits, for the lowest bit-rate stream, just the I-frames; for a higher bit-rate stream, the I- and P-frames; and for a still higher bit-rate, all the frames. In this instance one proceeds by transmitting only I-frames at Step 402 during the slow-start phase.
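A minimal sketch of the frame selection implied by this embodiment follows; the frame representation and the phase names are assumptions for illustration.

```python
def frames_to_send(frames, phase):
    """frames: sequence of (frame_type, payload) pairs;
    phase: 'slow_start', 'low', 'medium' or 'high'."""
    allowed = {
        "slow_start": {"I"},          # Step 402: I-frames only while rate unknown
        "low":        {"I"},
        "medium":     {"I", "P"},
        "high":       {"I", "P", "B"},
    }[phase]
    return [f for f in frames if f[0] in allowed]

sequence = [("I", b""), ("B", b""), ("B", b""), ("P", b""),
            ("B", b""), ("B", b""), ("P", b"")]
print([t for t, _ in frames_to_send(sequence, "slow_start")])   # ['I']
```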


A yet further example is the transmission of a page for display (a "web page") consisting of text and graphics. The idea here is slightly different from the preceding examples in that we are not now concerned with the transmission of material that has to be presented to the user in real time. Nevertheless, it is considered advantageous to provide, as here, for alternative graphics resolutions. So the store 10 contains text, for example in the form of an html file, and separate image files corresponding to one or more images which the receiving terminal is to display, in conventional manner, as part of a composite display. For each image there are several, perhaps three, image files stored in the store 10, at different resolutions. The text, or the beginning of it, is transmitted in Step 402 during the slow-start phase. At Step 408 a decision is made, based on the available transmitting rate RT, as to which resolution to choose, the idea being that one chooses the highest resolution that allows the images to be transmitted in a reasonable period. The exit at Step 409 is not necessary in this case. Then at Step 410 the remaining text (if any) is transmitted, followed by the file of the chosen resolution for the or each image. If the chosen files are renamed with filenames corresponding to those embedded in the text (i.e. independent of the resolution) then no modification at the receiving terminal is necessary and it can display the composite page using a standard web browser.
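A hedged sketch of the corresponding Step 408 decision for images is shown below; the target download time, the filenames and the file sizes are illustrative assumptions.

```python
def choose_image_file(versions, rt_kbit_s, max_seconds=5.0):
    """versions: list of (filename, size_in_kbits), highest resolution first.
    Pick the highest resolution that can be delivered within max_seconds at
    the measured rate; fall back to the lowest resolution (no exit at 409)."""
    for filename, size_kbits in versions:
        if size_kbits / rt_kbit_s <= max_seconds:
            return filename
    return versions[-1][0]

print(choose_image_file([("photo_hi.jpg", 400), ("photo_med.jpg", 150),
                         ("photo_lo.jpg", 40)], rt_kbit_s=24.8))   # photo_lo.jpg
```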

Claims
  • 1. A method of transmitting data initially in a start-up mode over a network having initially undetermined transmission capacity, in which the data comprises a single first part including no alternatives therein relating to different resolutions and at least two alternative second parts in which the at least two alternative independent second parts respectively corresponding to different independently alternative resolutions which may be independently transmitted without regard for the other alternative second part(s), for synchronized presentation at a receiving terminal simultaneously with the first part, said method comprising: (a) in an initial start-up mode for starting data transmission over a particular communications channel, initially transmitting at least an initial portion of only the single first part; (b) receiving data indicative of the available transmission capacity for said initial transmission in step (a); (c) choosing one from among the alternative independent second parts corresponding to respectively different resolutions for immediate transmission following step (b) in a second phase of said start-up mode, as a function of the data indicative of the available transmission capacity received in step (b); and (d) thereafter, in a third phase of said start-up mode, transmitting only the chosen one independent second part and any remainder of the single first part for synchronized presentation at a receiving terminal simultaneously with the first part during said initial start-up mode, thereby quickly achieving initial transmission of said second part at a resolution suited to available transmission capacity.
  • 2. A method according to claim 1 including the step of generating said data indicative of the available transmission capacity by monitoring the transmission by the network of the said initial portion of the single first part.
  • 3. A method according to claim 1 in which, in an initial time period of step (d), transmission of a leading part of the chosen second part of an extent corresponding to the extent of the single first part already transmitted is performed preferentially to, or to the exclusion of, further transmission of the single first part, thereby causing the transmission of the synchronized second part to catch up with its corresponding first part.
  • 4. A method for transmitting related audio and video digitized data representing an audio-visual presentation over a communications network having an initially undetermined transmission capacity wherein the audio data includes only a single version thereof, said method comprising: (a) initially transmitting digitized audio data of said audio-visual presentation over a communications network without corresponding related digitized video data; (b) determining available transmission capacity of the communications network based on said initial transmission of audio data for which there is only a single version to be transmitted; (c) selecting one of plural corresponding but different resolution related digitized video data of said audio-visual presentation as a function of the determined available transmission capacity; and (d) thereafter continuing to transmit (i) said digitized audio data in the same single available version as previously transmitted, and (ii) the selected resolution related digitized video data over said communications network as selected in step (c) to provide a coordinated audio-video presentation.
Priority Claims (1)
Number: 0306296.5; Date: Mar 2003; Country: GB; Kind: national
PCT Information
Filing Document: PCT/GB2004/000974; Filing Date: 3/8/2004; Country: WO; Kind: 00; 371(c) Date: 9/19/2005
Publishing Document: WO2004/084520; Publishing Date: 9/30/2004; Country: WO; Kind: A
Related Publications (1)
Number: 20060182016 A1; Date: Aug 2006; Country: US