Method for delivering data over a network

Abstract
This invention describes a new method and system for delivering data over a network to a large number of clients, which may be suitable for building large-scale Video-on-Demand (VOD) systems. In current VOD systems, the client may either suffer from a long latency before starting to receive the requested data or be provided with insufficient interactive functions, unless the network load is significantly increased. The method utilizes two groups of data streams, one responsible for minimizing latency while the other provides the required interactive functions. In the anti-latency data group, uniform, non-uniform or hierarchical staggered stream intervals may be used. A system realized based on this invention may have a relatively small startup latency while users may enjoy most of the interactive functions that are typical of video recorders, including fast-forward, forward-jump, and so on. Furthermore, this invention may also be able to keep the number of data streams, and therefore the bandwidth required, relatively small.
Description


FIELD OF THE INVENTION

[0001] This invention relates to methods and systems for delivering data over a network, particularly those for delivering a large amount of data with repetitive content to a large number of clients, like Video-on-Demand (VOD) systems.



BACKGROUND OF THE INVENTION

[0002] Current VOD systems face a number of challenges. One of them is how to provide the clients, which may number in the millions, with sufficient interactivity like fast-forward/backward and/or forward/backward-jump. At the same time, the provision of such functions should not impose a severe network load, as the network resources, namely the bandwidth, may be limited. Furthermore, every client generally prefers the movie he selects to start as soon as possible.


[0003] The following sections describe some of the currently used VOD systems and their possible disadvantages:


[0004] 1. Near-VOD (NVOD) with Regular Stream-Interval


[0005] A NVOD system consists of staggered multicast streams with a regular stream interval T (FIG. 1). The streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanisms (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, wavelength-division multiplexing, etc.). The distribution mechanisms include point-to-point, point-to-multipoint and other methods. Each stream is divided into regular segments of interval T, and the segments are labelled 1, 2, 3, . . . , N respectively. The content that is to be distributed to the users is carried on the N segments and the content is replicated on all these streams. The content is also repeated on each stream in time. By using such a staggered streaming arrangement with regular stream interval T, the users are guaranteed to receive the content at any time with a start-up latency less than T. However, there is no provision for user interactivity in such a system. If a user interrupts the content viewing, say by pausing the display, the user cannot resume the viewing at the same play point where the user pauses and is forced to skip some content to keep up with the multicast stream that is continuously playing.


[0006] 2. Quasi-VOD (QVOD) with Irregular Stream-Interval


[0007] A QVOD system consists of staggered multicast streams with irregular stream intervals (FIG. 2). The streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanisms (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, wavelength-division multiplexing, etc.). The distribution mechanisms include point-to-point, point-to-multipoint and other methods. Unlike the NVOD system where the streams constantly exist, the streams in a QVOD system are created on demand from the users' requests for the content. The users' requests within a certain time interval Ti are batched together and served together by Stream i. The stream intervals T1, T2, . . . Ti, . . . are irregular. The streams (Streams 1 to i, etc.) are all provided on demand and will be removed as soon as the content distribution has been completed. The streams are constantly created as users' requests come in. By using such a staggered streaming arrangement with irregular stream interval Ti, the particular group of users starting within interval Ti is guaranteed to receive the contents within Ti (start-up latency). Again, there is no provision for user interactivity in such a system. If a user interrupts the content viewing, say by pausing the display, the user cannot resume the viewing at the same play point where the user pauses and is forced to skip some content to keep up with the multicast stream that is continuously playing.


[0008] 3. Distributed Interactive Network Architecture (DINA)


[0009] The DINA system refers to the method and system as described in the applicant's PCT applications PCT/IB00/001857 & 001858. In the DINA system, interactive functions including fast-forward/backward, forward/backward-jump, slow motion, and so on can be provided by a plurality of multicast video data streams in conjunction with a plurality of distributed interactive servers. Although interactive functions may be provided to the client in such a DINA system, the network load may increase if the start-up time for each user's request is to be reduced. This is determined by the stream interval of the multicast data streams. Generally, the number of data streams, and therefore the network load, increases as the stream interval decreases.


[0010] In the NVOD and QVOD systems, a user wanting to view the content will simply tap into one of the many staggered streams and view the content simultaneously with all others sharing the stream. While such schemes are simple and efficient, they suffer from two difficulties—a large start-up latency and user inflexibility.


[0011] For the first difficulty, a user may have to wait as long as one stream interval T before the request is served, and the waiting time may be as large as many minutes or even hours, depending on the stream interval. Although the stream interval can be made very small, say even down to a few seconds, this also means that the system has to provide a large number of streams for serving the same amount of content. The number of streams required is simply
$R/T$,


[0012] where R is the length of the content and T is the stream interval. Thus, a small start-up latency may incur a much higher transmission bandwidth and cost. The DINA system may also face such a difficulty.
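
By way of a worked example, the short Python sketch below evaluates this relationship for a few candidate stream intervals; the 2-hour content length and the interval values are assumed figures chosen only for illustration.

```python
# Illustrative calculation of the NVOD/QVOD stream-count trade-off:
# number_of_streams = R / T, where R is the content length and T the stream interval.
# The content length and candidate intervals below are assumed example values.

import math

R = 120 * 60  # content length R in seconds (a 2-hour movie, assumed for illustration)

for T in (30 * 60, 5 * 60, 30, 5):  # candidate stream intervals in seconds
    streams = math.ceil(R / T)
    print(f"stream interval T = {T:5d} s  ->  worst-case latency {T} s, "
          f"streams required = {streams}")
```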


[0013] For the second difficulty, the users viewing a multicast stream cannot freely interrupt the stream because there are other viewers. Therefore, NVOD and QVOD systems cannot allow VCR-like interactivity such as pause, resume, rewind, slow motion, fast forward, and so on. These systems also hinder the deployment of new forms of interactive media. In recent years, one popular approach to offer some form of VCR-like interactivity over NVOD and QVOD systems is to add a storage unit to the set top box (STB) so as to cache all the available content being broadcast. Such systems suffer from a higher system cost and operational problems like storage unit failure and management.


[0014] It can be realised that the prior art may fail to provide a solution to the existing problems in VOD systems. Specifically, current VOD systems may not be able to provide the clients/users with the desired interactive functions with a short start-up time, while at the same time minimising the network load. Therefore, it is an object of this invention to resolve at least some of the problems set forth in the prior art. As a minimum, it is an object of this invention to provide the public with a useful choice.



SUMMARY OF THE INVENTION

[0015] Accordingly, this invention provides, in the broad sense, a method and the corresponding system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client. The method of this invention includes the steps of:


[0016] generating at least one anti-latency data stream containing at least a leading portion of data for receipt by a client; and


[0017] generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.


[0018] The anti-latency data streams and the interactive data streams may be generated by at least one anti-latency signal generator and at least one interactive signal generator, respectively.


[0019] It is another aspect of this invention to provide a method and the corresponding system for transmitting data over a network to at least one client including the step of fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.


[0020] The K data segments may be generated by a signal generator.


[0021] It is yet another aspect of this invention to provide a method and the corresponding system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including the steps of:


[0022] generating at least one anti-latency data stream containing at least a leading portion of data for receipt by the client;


[0023] pre-fetching the leading portion in the client as pre-fetched data; and


[0024] generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.


[0025] This invention also provides a method and the corresponding system for transmitting data over a network to at least one client including the steps of generating a plurality of anti-latency data streams, in which the anti-latency data streams include:


[0026] a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and


[0027] a plurality of finishing data streams, each of the finishing data streams:


[0028] containing at least the rest of the leading portion of said data; and


[0029] repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval.


[0030] This invention further provides a method and the corresponding system for transmitting data over a network to at least one client. The method includes the steps of generating M anti-latency data streams from 1 to M, wherein an mth anti-latency data stream has Fm segments, and Fm is an mth Fibonacci number; and wherein said Fm segments are repeated continuously within the mth anti-latency data stream.


[0031] It is yet another aspect of this invention to provide a method and the corresponding system for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network. The method includes the steps of generating M anti-latency data streams containing 1 to K anti-latency data segments, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that a kth leading segment is repeated by an anti-latency time interval kT within the anti-latency data streams.


[0032] This invention further provides a method for receiving data being transmitted over a network to at least one client. The data to be transmitted is fragmented into K segments each requiring a time T to transmit over the network. The data is divided into two groups of data streams: the anti-latency data streams, which include M anti-latency data streams, and the interactive data streams, which include N interactive data streams. The method for receiving the data includes the steps of:


[0033] raising a request for said data. The request may be raised by a processor of the client; and


[0034] connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams. The client or the receiver may connect to the anti-latency data streams by a connector.


[0035] This invention also provides a method and a corresponding system for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including the steps of:


[0036] pre-fetching the leading portion in the client as pre-fetched data, which is contained in the buffer of the client; and


[0037] merging the pre-fetched data to the remaining portion by a processor.


[0038] Further embodiments and options of the above methods and systems will be described in the following sections, and may then be apparent to one skilled in the art after reading the description.







BRIEF DESCRIPTION OF THE DRAWINGS

[0039] Preferred embodiments of the present invention will now be explained by way of example and with reference to the accompanying drawings in which:


[0040]
FIG. 1 shows the data stream structure of a NVOD system.


[0041]
FIG. 2 shows the data stream structure of a QVOD system.


[0042]
FIG. 3 shows the overall system architecture of the data transmission system of this invention.


[0043]
FIG. 4 shows the data streams arrangement of Configuration 1 of the data transmission system of this invention.


[0044]
FIG. 5 shows the data streams arrangement of Configuration 2 of the data transmission system of this invention.


[0045]
FIG. 6 shows the data streams arrangement of Configuration 3 of the data transmission system of this invention. Note the difference in the arrangement of the Group II data streams compared with FIGS. 4 & 5.


[0046]
FIG. 7 shows yet another Group I data streams arrangement of Configuration 3.


[0047]
FIG. 8 shows the data streams arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.


[0048]
FIG. 9 shows yet another arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.


[0049]
FIG. 10 shows one of the data streams arrangement of Configuration 5 of the data transmission system of this invention. The particular arrangement of Group I data streams shown in this figure combines Configurations 1 & 3.


[0050]
FIG. 11 shows the system configuration of a multicast data streams generator of the data transmission system of this invention.


[0051]
FIG. 12 shows the system configuration of the receiver of the data transmission system of this invention.


[0052]
FIG. 13 shows the local storage versus transmission bandwidth trade-off relationship.







DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

[0053] This invention is now described by way of example with reference to the figures in the following sections. Even though some of them may be readily understandable to one skilled in the art, the following Table 1 shows the abbreviations and symbols used throughout the specification together with their meanings so that the abbreviations or symbols may be easily referred to.
TABLE 1
Abbreviations and Symbols Used

Abbreviation/Symbol   Meaning
VOD                   Video-on-Demand
NVOD                  Near Video-on-Demand
QVOD                  Quasi Video-on-Demand
DINA                  Distributed Interactive Network Architecture, as described in PCT applications nos. PCT/IB00/001857 & 1858
VCR                   Video Cassette Recorder
STB                   Set-Top-Box
DDVR                  Diskless Digital Video Recorder, the client of the system
IVOD                  Instant Video-on-Demand, possible name of the system of this invention
J                     no. of anti-latency data segments in an individual anti-latency data stream (in Configurations 1 to 3) or no. of data segments of the leading portion of the data to be transmitted (Configuration 4)
K                     no. of data segments of the data to be transmitted
M                     no. of anti-latency (Group I) data streams
N                     no. of interactive (Group II) data streams
Q                     amount of data to be transmitted
R                     time required to transmit Q data over the network
S                     amount of data in each data segment
T                     time required to transmit each data segment over the network
A                     no. of data streams in Group I(1) streams
C                     no. of data segments in the data of Group I(1) streams
B                     no. of data streams in Group I(2) streams
D                     no. of data segments in the data of Group I(2) streams
E                     no. of data segments in the coarse jump interval


[0054] Although the following description refers to the data to be delivered as being video, it is expressly understood that data in other forms may also be delivered in the system of this invention, for example audio or software programs, or their combination. For instance, this invention may be used for deploying operating system software to a large number of clients through a network upon request. Further, this invention may be utilised in data transmission systems handling a large amount of data with repetitive content, for instance in a video system bus of a computer handling many complicated but replicated 3D objects. Moreover, this invention may not be limited to the transmission of digital data only.


[0055] In this invention, a multi-stream multicasting technique is used to overcome the existing problems in VOD systems as described in the Background section. By using this technique, the users are allowed VCR-like interactivity without the need to add a storage unit at the STB and caching all the content that may be viewed by the user on a daily basis.


[0056]
FIG. 3 shows the system configuration. The multicast streams are generated from a multicast server unit. The streams are multiplexed onto the physical media and distributed to the end users through a distribution network. At each user end, there is a set top box (STB), such as DDVR, that selects a multitude of streams for processing. By arranging the content to be carried on the streams in a desired manner (as shown later in FIGS. 4-10), the start-up latency may be minimized while the users are provided with interactive functions. The DDVR should have sufficient bandwidth, buffer and processing capability to handle the multi-streams.


[0057] The data transmission system of this invention, which may be called an IVOD system, may look similar to the NVOD system. However, the IVOD and NVOD systems are differentiated by the following points:


[0058] 1. how the content is put on the staggered streams,


[0059] 2. how the staggered streams are generated,


[0060] 3. how the DDVR selects and processes the multitude of staggered streams to restore the content.


[0061] The word “staggered” used above and throughout the specification in describing the data streams refers to the situation that each of the data streams begins transmission at different times. Therefore, two “frames” of two adjacent data streams, in which the term “frame” represents the repeating unit of each data stream, are separated by a time interval.


[0062] In the broad sense, the data transmission method and system may be described as providing two groups of data streams, Groups I and II. Group I data streams, which may be termed anti-latency data streams, may serve to reduce the latency for starting up the transmission of the required data. Group I data streams may be generated by at least one anti-latency signal generator. Group II data streams, which may be termed interactive data streams, may serve to provide the desired interactive functions to the users. Group II data streams may be generated by at least one interactive signal generator. For the interactive actions provided by Group II data streams, reference can be made to the applicant's PCT applications nos. PCT/IB00/001857 & 001858, the contents of which are incorporated herein by reference. The operation of the interactive functions is not considered to be part of the invention in this application and the details will not be further described here.


[0063] The operation of the IVOD system can best be illustrated by the following examples. Each of these examples is a valid IVOD system but they all differ in details with various tradeoffs. These examples are only intended to show the working principles of IVOD systems and are not meant to describe the only possible ways of IVOD operation.


[0064] In the following examples, the content to be transmitted, having a total amount of data Q, requires a total time R to be transmitted over the network. The content, for example, may be a movie. The Q data is broken up into K segments each having an amount of data S. Each data segment requires a time T to be transmitted over the network. Q and S may be in units of megabytes, while R and T are in units of time. For the sake of convenience, the data segments of the Q data are labelled from 1 to K respectively. Therefore,
$K = R/T$.


[0065] The Q data may be divided into a leading portion and a remaining portion. In most cases, the Group I anti-latency data streams may contain the leading portion only. The Group II interactive data streams may contain the remaining portion or the whole set of the Q data, and this may be a matter of design choice to be determined by the system manager.


[0066] It should be noted that the system may still work if the individual data segments contain different amounts of data from each other, provided that they all require a time T for transmission. This may be achievable by controlling the transmission rate of the individual data segments. However, the individual data segments may be preferred to have the same amount of data S for the sake of engineering convenience. On the other hand, it may be relatively more difficult to implement the system with each of the data segments having the same amount of data S but with different transmission times.


[0067] Although the following description refers to the transmission of one set of data, for instance, a movie, it should be apparent to one skilled in the art that the method and system may also be adapted to transmit a certain number of sets of data depending on, for example, the bandwidth available.



A. Dual Streaming IVOD System (Configuration 1)

[0068] The simplest IVOD system is characterized by a dual-streaming operation. Dual streaming means that each user will tap into at most two of the multicast data streams at any time. Most of the time, the user may only be tapping into one data stream.


[0069] The segments are put onto the staggered streams as shown in FIG. 4. There are two groups of staggered streams. For Group I anti-latency data streams, there are J segments on each frame. T is the anti-latency time interval and may also be the upper bound for the start-up latency of the IVOD system. Each anti-latency data stream is preferably staggered by the anti-latency time interval T, although the anti-latency time interval may be set at any desired value other than T.


[0070] In this particular example, J is equal to 16 and T is 30 seconds. So the frames in each of the Group I data streams repeat themselves after a time of JT, being 8 minutes. There are a total of M streams in Group I.


[0071] For Group II interactive data streams, there are N interactive data streams, with each of them being staggered by an interactive time interval. Although the interactive time interval may again be set at any desired value, the interactive time interval is preferably JT (i.e. 8 minutes in this example) for the sake of engineering convenience. Assuming the length of the content is R (say R equals 120 minutes), then there should be at least a total of
$R/(JT) = 15$


[0072] streams in Group II. N may be larger than this value but this may create unnecessary network load.


[0073] When a user starts to view the content at time ti, the DDVR at the user end will select one stream from Group I (Stream Ii) and one stream from Group II (Stream IIj) to tap into. Once the client connects to Streams Ii and/or IIj, the data streams are processed by the DDVR (the client), and the segments are buffered according to the segment sequence number. The availability of the Group I staggered streams with stream interval T limits the start-up latency to at most T.


[0074] Alternatively, the user or the client may tap into Stream Ii only and wait for all of the data in the leading portion to be received by the client before tapping into Stream IIj. After the DDVR has latched onto a Group I stream, the DDVR will immediately look for a suitable Group II stream for merging. In this particular case, each Group II data stream may preferably contain only the remaining portion of the Q data.


[0075] The method for merging data streams can be found in the DINA technology. After merging, the Group I stream may no longer be needed and the DDVR may then rely solely on Stream IIj for subsequent viewing. This may be the alternative optimised solely to minimise network load.


[0076] It should be noted that once the system has started, the user could initiate the following interactive requests, including pause and resume, rewind, and slow motion playback. However, forward and backward jumps may be restricted to jumps to any one of the Group I or Group II streams (at any particular time). This problem may be resolved by fine-tuning the parameters of the system. For instance, Group I data streams may be designed to contain content that relatively few people wish to look at, like copyright notices.


[0077] The total number of streams in this type of IVOD system is
$M + R/(JT)$.


[0078] The optimal system configuration is calculated to be
$M = N = J = \sqrt{R/T}$,


[0079] and the optimal total number of streams is given by
$2\sqrt{R/T}$.
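
A minimal sketch of this calculation follows, using the example values R = 120 minutes and T = 30 seconds given above and assuming, as in the optimal configuration, that the number of Group I streams M equals J; the rounding of the stream counts to whole numbers is an assumption made for the illustration.

```python
# Configuration 1: total streams = M + R/(J*T), assuming M = J Group I streams.
# Optimal when M = N = J = sqrt(R/T), giving about 2*sqrt(R/T) streams in total.
# R = 120 min and T = 30 s are the example values from the text.

import math

R = 120 * 60          # content length in seconds
T = 30                # anti-latency time interval in seconds

J_opt = math.sqrt(R / T)          # optimal number of segments per Group I frame
M = math.ceil(J_opt)              # Group I streams (rounded up, an assumption)
N = math.ceil(R / (M * T))        # Group II streams, staggered by J*T
print(f"J ~ {J_opt:.1f}, M = {M}, N = {N}, total = {M + N}")
# With the text's choice J = 16: M = 16 and N = 15, i.e. 31 streams in total.
```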



B. Dual Streaming IVOD System (Configuration 2)

[0080] The second example of an IVOD system is also characterised by a dual-streaming operation. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in FIG. 5.


[0081] In this configuration, there are also two groups of staggered streams. For Group I anti-latency data streams, there are J segments on each frame and the frames are repeated on each stream. In this example, J is again chosen to equal 16 and T is 30 seconds. This configuration is characterised in that one of the Group I data streams, Stream I1, contains only Segment 1 repeated in all time slots. Streams I2 to I9 contain Segments 2 to 17. In other words, Stream I1 may be viewed as a leading data stream containing the leading segment of the leading portion. Streams I2 to I9 may be considered as a plurality of finishing data streams containing the rest of the leading portion, in the number of J segments. The Group I stream interval may be chosen to be any desired value, but is again preferably set to be T for the same reason as in Configuration 1. Streams I2 to I9 repeat themselves after JT (i.e. 8 minutes in this example).


[0082] In this particular example, there should be at least a total of
$M = J/2 + 1$


[0083] streams in Group I for the smooth merging of the leading data stream and the finishing data stream. M may be less than this value but then the user may suffer from the phenomenon of “dropping frames”. M may be larger than this value but this may create unnecessary network load. This may be a matter of design choice that should be left to be determined by the system administrator.


[0084] Although the leading data stream shown in FIG. 5 contains only one leading segment, it should be understood that the leading data stream may contain more than one leading segment, for example, segments 1-4. The above conditions of the Group I anti-latency data streams of this Configuration 2 may then be viewed as T being four times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency. On the other hand, M may be substantially reduced and could be
$M = J/8 + 1$


[0085] for the smooth merging of the leading data stream and the finishing data stream. Although this may be less desirable, this may be again a matter of design choice that should be determined by the system administrator.


[0086] For Group II streams, the arrangement and the set up of the streams may be the same as in the previous example, and the same settings and variations are also applicable to this configuration.


[0087] When a user starts to view the content at time ti, the DDVR at the user end will immediately tap onto Stream I1. The start-up latency should be bounded to T as the leading segment is repeated every time period T. After all data in the leading segment is received, the DDVR will also tap onto one of the Group I finishing data streams, I2 to I9 in this case. For the ease of illustration, Stream Ii is chosen. As an alternative, the DDVR may tap onto the leading data stream and one of the finishing data streams simultaneously if the DDVR is capable of doing so. In the latter case, both streams are processed by the DDVR and the segments are buffered according to the segment sequence number.


[0088] The DDVR will also tap onto one of the Group II streams (in this case Stream II2). The time at which the DDVR taps onto the Group II streams is a matter of choice; it may do so:


[0089] 1. immediately after tapping onto the leading data stream Stream I1


[0090] 2. immediately after tapping onto one of the finishing data streams


[0091] 3. after all data in the leading portion contained in Group I data streams is received by the DDVR.


[0092] Generally, the DDVR should tap onto one of the Group II streams at least right before all data in Group I streams is received or played by the client.


[0093] After all data in the Group I data streams has been buffered and received, the DDVR then merges onto one of the Group II streams. The merging technique is described in the DINA technology. After merging, the Group I stream (i.e. Stream Ii) may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing to save bandwidth. Any allowable interactive request received at any time can be entertained as previously shown in the DINA technology.


[0094] The total number of streams in this IVOD system is
$(J/2 + 1) + N$.


[0095] As N preferably equals to
$R/(JT)$,


[0096] the optimal configuration is given by
$J = \sqrt{2K} = \sqrt{2R/T}$


[0097] and the optimal total number of data streams of the system is equal to
$\sqrt{2K} + 1 = \sqrt{2R/T} + 1$.
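
The sketch below evaluates these expressions for the same example values (R = 120 minutes, T = 30 seconds, hence K = 240); the rounding to whole streams is assumed for illustration.

```python
# Configuration 2: total streams = (J/2 + 1) + R/(J*T).
# Minimising over J gives J = sqrt(2K) = sqrt(2R/T) and a total of sqrt(2K) + 1.
# R and T are the example values used earlier; rounding is assumed for illustration.

import math

R = 120 * 60      # content length in seconds
T = 30            # anti-latency time interval in seconds
K = R // T        # number of data segments (240 here)

J_opt = math.sqrt(2 * K)
total_opt = math.sqrt(2 * K) + 1
print(f"optimal J ~ {J_opt:.1f}, optimal total streams ~ {total_opt:.1f}")

# Total stream count for the example value J = 16 used in the text:
J = 16
total = (J // 2 + 1) + math.ceil(K / J)
print(f"J = {J}: Group I = {J // 2 + 1}, Group II = {math.ceil(K / J)}, total = {total}")
```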



C. Dual Streaming IVOD System (Configuration 3)

[0098] The third example of an IVOD system is also characterised by a dual-streaming operation, with the segments arranged in a hierarchical periodic frame structure with a size based on the Fibonacci numbers. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in FIG. 6. There are also two groups of staggered streams.


[0099] In this configuration, the Group I data streams contain the data in the leading portion having J segments. Note that this J is slightly different from those used in Configurations 1 and 2. There are M Group I data streams labelled from 1 to M. For each Group I stream Im, where m is an integer representing the stream number, the frame period is given by Fm, where Fm is the m-th Fibonacci number. The first few Fibonacci numbers are shown in Table 2. The Fibonacci numbers have the property that $F_y = F_{y-1} + F_{y-2}$, where y is an integer starting from 3. The Group I stream interval is again preferably set to be T as in Configurations 1 and 2. There are 12 Group I streams in this example. For Group II streams, the arrangement and the set up of the streams are similar to the previous examples, but for the sake of illustration, the Group II streams start at Segment 81.
TABLE 2
Fibonacci numbers

j     1   2   3   4   5   6    7    8    9    10   11    12
Fj    1   2   3   5   8   13   21   34   55   89   144   233


[0100] The principle of operation can best be explained by the following, even though many different variations are possible. When a user starts to view the content at time t, the DDVR at the user end will immediately tap onto two Group I data streams, Streams I1 and I2. Both Segment 1 from Stream I1 and Segment 2 or 3 from Stream I2 will be buffered. As there are now two segments in the buffer and Stream I2 has a frame size of 2, Stream I2 can be smoothly merged into using the methodology as described in the DINA technology. Thus, the startup latency should be bounded by T. After Segment 1 has been received, the DDVR will tap onto Streams I2 and I3. Since there are only two segments in Stream I2, Segment 3 will either be buffered during the time when Segment 2 is being received, or Segment 3 will be available on Stream I2 immediately following Segment 2's completion. After both Segments 2 and 3 have been received, the DDVR will tap onto Streams I3 and I4, and the process continues as before. Both streams are processed by the DDVR and the extra segments are buffered according to the segment sequence number.


[0101] In the above discussion, the DDVR is presumed to connect to the 1st and 2nd data streams for starting up the movie such that the latency is bounded by T. However, if the user wishes, he may choose to first tap onto the mth and (m+1)th data streams, wherein m is any number larger than 1. The user can still view the content but may suffer from a larger latency. This may be preferred by some users who wish to skip the first few minutes of a movie, for example.


[0102] By constructing the frame period of the streams according to the Fibonacci number Fm, after Stream Im-1 has been received, the DDVR would have buffered at least $F_m = F_{m-1} + F_{m-2}$ time slots. Using the merging methodology as described in the DINA technology, Stream Im-1 can be smoothly merged into Stream Im, as the frame size of Stream Im is exactly Fm.


[0103] It is noted that after m segments are received, exactly m more segments would have been buffered because of the dual streaming arrangement. The DDVR preferably begins to merge onto one of the Group II streams, at the very least to save bandwidth, once the number of segments buffered has exceeded the size of the Group II stream interval (in this case 80 segments are needed for an 8-minute Group II stream interval). After merging, the Group I stream (i.e. Stream Ii) may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing. Any allowable interactive request received at any time can be entertained as described in the DINA technology.


[0104] There is no optimal parameter for this Configuration. To save bandwidth, there may be no Group II data stream at all. However, users may then only be able to enjoy limited interactivity depending on how much of the data is received and buffered in the DDVR. Specifically, the user may perform pause, resume, rewind, slow motion, and backward jump, but the user may not be able to perform fast forward and forward jump functions.


[0105] The number of Group I data streams required, M, is determined by the number of Group II data streams, which is in turn to be determined manually according to various system factors. With a given start-up latency T, the total number of streams required in this IVOD system can be found by looking up the necessary frame size from a table containing the relevant Fibonacci numbers. The minimal number of data streams should be M such that
$F_M \geq 2K/N$


[0106] for the smooth merging between the individual Group I data streams. M may be less than this value but then the user may suffer from the phenomenon of “dropping frames”. M may be larger than this value but this may create unnecessary network load. This may be a matter of design choice that should be left to be determined by the system administrator.
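
Assuming the condition above reads F_M ≥ 2K/N, the following sketch finds the minimal M for the example values used in this configuration (T = 6 seconds, a 2-hour content and an 8-minute Group II stream interval); the function name and the rounding are illustrative assumptions.

```python
# Minimal number of Group I (Fibonacci) streams M such that F_M >= 2*K/N,
# using the Table 2 numbering F_1 = 1, F_2 = 2, F_3 = 3, ...
# The inequality is the reconstructed merge condition; T = 6 s, a 120-minute
# content and an 8-minute Group II interval are the example values.

def minimal_fibonacci_streams(segments_per_group2_interval):
    target = 2 * segments_per_group2_interval
    if target <= 1:
        return 1
    f_prev, f_cur, m = 1, 2, 2        # F_1, F_2
    while f_cur < target:
        f_prev, f_cur = f_cur, f_prev + f_cur
        m += 1
    return m

K = (120 * 60) // 6               # 1200 segments of 6 s in a 2-hour content
N = 15                            # Group II streams at an 8-minute interval
M = minimal_fibonacci_streams(K // N)
print(f"K/N = {K // N} segments, minimal M = {M}")
# Gives M = 12, consistent with the 12 Group I streams of the example in the text.
```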


[0107] Using this technique, the start-up latency T can be as low as 6 seconds (with an average of 3 sec), with a Group II stream interval of 8 minutes. The total number of streams required for a 2-hour content can be as low as only 26.


[0108] An alternative arrangement for the Group I streams is shown in FIG. 7. Note that the frame structure of the streams only follows the Fibonacci sequence after Stream 4.



D. Multi-Streaming IVOD System (Configuration 4)

[0109] The previous three examples show several possible implementations of the IVOD systems with dual-streaming. In fact, there are many more possible implementations of the IVOD system, each depending on a different arrangement of the segments in different streams, and on the maximum number of streams that the end user DDVR must simultaneously tap into and process. The above three examples are relatively simple to understand and implement, but the number of streams used is not optimal because of the restriction that at most two streams are tapped into and processed at any given time. In the current configuration, a multi-streaming IVOD system with the optimal number of streams is demonstrated.


[0110] This configuration is realized with the assumption that all the streams that carry the content are tapped into and processed by the end user DDVR. FIG. 8 shows a possible optimal arrangement of the initial thirty segments or so in various streams based on the harmonic series approach. The segments are labelled 1, 2, 3, etc. The necessary and sufficient condition for guaranteeing the start-up latency to be bounded within one slot interval using only an optimal number of streams is that the placement of the segments should be done in such a way that Segment j (i.e. the j-th segment from the beginning of the leading portion) is repeated in every j time slots or less, for all j from 1 to J. For example, Segment 1 should be repeated in every time slot in order that the start-up latency is bounded within one anti-latency interval T. Therefore, there may be a whole stream taken up by Segment 1 alone. Segment 2 should be repeated in every other time slot in order that the second segment is available immediately after the first segment has been received. Similarly, Segment 3 should be repeated in every three time slots and Segment j should be repeated in every j time slots. For j>1, the segment j may be repeated more frequently than required. That is, the jth segment is repeated by an anti-latency time interval jT. Note that the definition of the term "anti-latency time interval" in this Configuration 4 is different from that in Configurations 1 to 3.


[0111] The exact stream where the segments are placed does not matter as we are assuming that all streams are being received and processed by the DDVR. The segments are buffered by the DDVR and rearranged into a suitable order. The unfilled slots in FIG. 9 can contain any data or even be left unfilled.


[0112] As in Configuration 3, there is no optimal parameter for this Configuration. To save bandwidth, there may be no Group II data stream at all, in which case users may only be able to enjoy limited interactivity depending on how much of the data is received and buffered in the DDVR. This may not be desirable. The number of Group I data streams required, M, is determined by the number of Group II data streams, which is in turn to be determined manually according to various system factors. The total number M of streams required for carrying the J time slots can be found by summing the harmonic series from 1 to J, such that
$M \geq \sum_{j=1}^{J} \frac{1}{j}$.


[0113] This is approximately equal to γ+ln(J), where γ is Euler's constant (approximately 0.5772) when J is large. Even though J can be set to any desired number larger than
$K/N$,


[0114] for the sake of engineering convenience, it is preferred to have
$J = K/N$,


[0115] which equals the number of data segments in the interactive time interval. This is the optimal number of streams required to bound the start-up latency to within one slot interval.
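
The following sketch evaluates the harmonic bound for the example values used earlier (T = 6 seconds, an 8-minute Group II stream interval and a 2-hour content); rounding the stream counts up to whole numbers is an assumption of the illustration.

```python
# Configuration 4: minimum number of Group I streams M >= H_J = sum_{j=1..J} 1/j,
# approximately gamma + ln(J) for large J. The example values below (T = 6 s,
# an 8-minute Group II interval, a 2-hour content) are taken from the earlier
# examples; stream counts are rounded up to whole streams.

import math

def harmonic(J):
    return sum(1.0 / j for j in range(1, J + 1))

T = 6                              # segment time in seconds
group2_interval = 8 * 60           # Group II stream interval in seconds
R = 120 * 60                       # content length in seconds

J = group2_interval // T           # J = K/N = 80 segments
M = math.ceil(harmonic(J))         # Group I streams (harmonic lower bound)
N = math.ceil(R / group2_interval) # Group II streams
approx = 0.5772 + math.log(J)      # Euler's constant + ln(J)

print(f"J = {J}, H_J = {harmonic(J):.2f} (~ {approx:.2f}), "
      f"Group I >= {M}, Group II = {N}, total >= {M + N}")
# 5 + 15 = 20 streams, consistent with the 8-minute column for this configuration in Table 3.
```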


[0116] To create an IVOD system based on this optimal multi-streaming condition, the streams are again divided into two groups, Groups I and II. The segment arrangements of the Group I streams have been shown in FIG. 8. The segment arrangements of the Group II streams are the same as those shown in any one of FIGS. 4 to 6. When a user initiates a viewing request, all of the Group I streams should be received and processed by the DDVR. In addition, a suitable Group II stream will also be tapped into and processed. This allows a smooth merging of the Group I streams (where the initial m segments are placed) into a single Group II stream. As an alternative, the tapping onto the Group II stream may wait until all data in the leading portion contained in the Group I streams is received by the client DDVR.


[0117] After one Group II stream interval (which is again set to be JT intentionally in this case), all the Group I streams may no longer be needed and only a single Group II stream is needed for the continuous viewing by the user. Like before, through the use of a plurality of Group II streams, once the system has started, the user could initiate any of the allowable interactive requests, including pause and resume, rewind, and slow motion playback.


[0118] As in Configuration 3, it is possible to create an IVOD system entirely based on the Group I streams as illustrated previously. By doing that, the number of streams can be reduced with minimised start-up latency. However, users of such systems may be restricted to limited interactivity, as discussed in Configuration 3. Furthermore, the buffer size at the DDVR must be as large as the entire content, and the processing capability required of the DDVR is more demanding for the current configuration. The decision regarding which system to deploy should be left as an option to the service provider.


[0119] It should further be noted that this multi-streaming arrangement of Configuration 4 may be used to replace the Fibonacci stream sequences (Group I streams) of Configuration 3 to further reduce the number of streams required. The condition is that the DDVR should have enough buffer and processing power to buffer and process the received data. Table 3 in the upcoming section lists some results for the various configurations.


[0120] A non-optimal multi-streaming arrangement known as the logarithmic streaming is shown in FIG. 9.
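
As an illustration of such a non-optimal placement, the sketch below builds a simple logarithmic grouping in the spirit of FIG. 9 (the exact layout of FIG. 9 is not reproduced): stream k carries segments 2^(k-1) to 2^k - 1 in round-robin, so every segment j recurs within j slots or less, satisfying the placement condition of this configuration. The function names and the chosen values of J and the number of slots are assumptions of the example.

```python
# A simple, non-optimal "logarithmic" placement satisfying the Configuration 4
# condition that segment j recurs at least once every j slots: stream k carries
# segments 2^(k-1) .. 2^k - 1 in round-robin, so each of them recurs with a
# period of 2^(k-1) <= j slots. This is an illustrative layout only; FIG. 9
# shows the arrangement actually contemplated by the document.

def logarithmic_schedule(J, slots):
    """Return schedule[stream][slot] = segment number, covering segments 1..J."""
    schedule = []
    k = 1
    while 2 ** (k - 1) <= J:
        group = list(range(2 ** (k - 1), min(2 ** k, J + 1)))
        schedule.append([group[t % len(group)] for t in range(slots)])
        k += 1
    return schedule

def check(schedule, J, slots):
    """Verify that segment j appears in every window of j consecutive slots."""
    for j in range(1, J + 1):
        positions = sorted(t for row in schedule for t in range(slots) if row[t] == j)
        gaps = [b - a for a, b in zip(positions, positions[1:])]
        assert positions and positions[0] < j and all(g <= j for g in gaps), j

J, slots = 30, 64
sched = logarithmic_schedule(J, slots)
check(sched, J, slots)
print(f"{len(sched)} streams carry segments 1..{J} "
      f"with segment j recurring every j slots or less")
```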



E. Mixed Dual-Dual/Multi-Dual Streaming IVOD System (Configuration 5)

[0121] Configurations 3 and 4 demonstrate an IVOD system with a very short start-up latency in comparison with Configurations 1 and 2 using a comparable number of streams. But Configuration 1 or 2 also has an advantage over Configuration 3 or 4: they allow coarse jumping from stream to stream during the first stream interval, while Configuration 3 or 4 does not. In real life, the first few minutes of a content source usually contain a lot of header and information that many users may want to skip by jumping. Therefore, it is desirable to provide at least a limited jump capability for the users.


[0122] By combining Configuration 1 or 2 and 3 or 4, one may create an IVOD system with a limited jump capability even without the help of an external unicast stream. This IVOD system contains three groups of staggered streams, namely, Groups I(1), I(2) and II. Group I(1) data streams have a total of A data streams responsible for distributing data having C segments. Similarly, Group I(2) data streams have a total of B data streams responsible for distributing data having D segments, with each of the B data streams being staggered by a coarse jump interval. There are E data segments in the coarse jump interval.


[0123] To give a more concrete example, let us assume a segment size T of 6 seconds. Let Group I(1) contain the first 7 Fibonacci streams as shown in Configuration 3. Let Group I(2) contain the 8 Group I streams as shown in Configuration 1 running from Segment 11 to 90, with a staggered stream interval of 10 segments. Note that Group I(2) can contain data segments running from 1 to 90, although this may seem redundant. Accordingly, the frame period of Group I(2) streams is 80 segments or 8 minutes, and this is the coarse-jump frame period allowing the user to perform a coarse-jump interaction when the DDVR is connecting to the Group I data streams. Group II streams of Configuration 5 are identical to the Group II streams of the other configurations. In this particular example, each of the Group II streams starts from Segment 1 and runs all the way to the end of the entire content. The arrangement of the streams and segments is shown in FIG. 10.


[0124] With this hierarchical arrangement of streams and segments, it can be seen that the user can start at any time with a start-up latency of one segment (6 seconds in this example). Furthermore, users can coarse jump at any time within the start-up period, the time when the DDVR connects to the Group I streams. The start-up period is preferably defined to be the time within the first Group II stream interval (that is, from the 0-minute point to the 9-minute point) as in previous configurations. Coarse jump points are 1 minute apart from each other, which is determined by the coarse-jump frame period. Thus, the users can skip the headers using this arrangement. The total number of streams needed for holding a two-hour content in the particular example shown in FIG. 10 is 30.
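
The stream-count arithmetic for this example may be sketched as follows; the Fibonacci merge condition F_A ≥ 2E used here is the reconstructed condition from Configuration 3, and the rounding is assumed for illustration.

```python
# Stream-count arithmetic for the Configuration 5 example: T = 6 s, a 2-hour
# content, a coarse-jump interval of 10 segments (1 minute) and an 8-minute
# Group II interval. The condition F_A >= 2E is the reconstructed merge
# condition from Configuration 3.

import math

T = 6                                  # seconds per segment
R = 120 * 60                           # content length in seconds
E = 10                                 # segments in the coarse jump interval
group2_interval = 8 * 60               # Group II stream interval in seconds

def minimal_fibonacci_streams(target):
    f_prev, f_cur, m = 1, 2, 2         # F_1, F_2 as numbered in Table 2
    while f_cur < target:
        f_prev, f_cur = f_cur, f_prev + f_cur
        m += 1
    return m

A = minimal_fibonacci_streams(2 * E)               # Group I(1): 7 Fibonacci streams
B = (group2_interval // T) // E                    # Group I(2): 80 segments / 10 = 8 streams
N = math.ceil(R / group2_interval)                 # Group II: 15 streams
print(f"A = {A}, B = {B}, N = {N}, total = {A + B + N}")   # 7 + 8 + 15 = 30
```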


[0125] Although FIG. 10 only shows the combination of Configurations 3 and 1 in Group I data streams, it should be obvious to those skilled in the art that the following combinations are also possible:


[0126] a. Configurations 4 and 1


[0127] b. Configurations 3 and 2


[0128] c. Configurations 4 and 2


[0129] The number of Group I(1) data streams required, i.e. A, may be determined by taking E as
$K/N$


[0130] in configurations 3 and 4. That is, if Configuration 3 is used in Group I(1), there should be A data streams in Group I(1) such that FA2E. If Configuration 4 is used, then
$A \geq \sum_{c=1}^{C} \frac{1}{c}$.


[0131] As in Configuration 4, C, the total number of data segments to be transmitted in Group I(1), preferably equals E. The same considerations on the number of data streams required as in Configurations 3 and 4 may also be applicable to Group I(1).


[0132] The decision regarding which combination to deploy should again be left as an option to the service provider.



Additional Features of Individual Data Segments

[0133] To facilitate the changeover of the streams without incurring substantial loss of data during the transition, the beginning of each data segment, which can be termed the head portion, may contain duplicated data appearing in the tail portion of the immediately preceding segment. The amount of data to be carried in the duplicated portion may be T′ (normalized with respect to the data rate of the stream), where T′ is the delay that may be incurred during the changeover of the streams. Typically, T′ may be on the order of 10-20 milliseconds.
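
A minimal sketch of how such overlapping segments could be assembled is given below; the 1 Mb/s data rate, the 6-second segment length and the T′ value of 20 milliseconds are assumed example figures, and the function name is illustrative.

```python
# Building segments whose head duplicates the tail of the preceding segment,
# so that up to T' of data lost while changing streams can be recovered.
# The byte counts below are assumed example figures (1 Mb/s stream, T' = 20 ms).

def segments_with_overlap(data: bytes, segment_bytes: int, overlap_bytes: int):
    """Split data into segments; each segment's head repeats the previous tail."""
    segments, pos = [], 0
    while pos < len(data):
        head = data[max(0, pos - overlap_bytes):pos]   # tail of preceding segment
        body = data[pos:pos + segment_bytes]
        segments.append(head + body)
        pos += segment_bytes
    return segments

stream_rate_bytes_per_s = 1_000_000 // 8                 # 1 Mb/s stream
segment_bytes = 6 * stream_rate_bytes_per_s              # S for a 6-second segment
overlap_bytes = stream_rate_bytes_per_s * 20 // 1000     # T' = 20 ms of data

content = bytes(30 * stream_rate_bytes_per_s)            # 30 s of dummy content
segs = segments_with_overlap(content, segment_bytes, overlap_bytes)
print(len(segs), "segments,", len(segs[1]), "bytes each including the duplicated head")
```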



IVOD System Requirements

[0134] There are several system requirements:


[0135] a. The server needs to generate the appropriate multi-streams in patterns that have been illustrated in any one of Configurations 1 to 5 or such patterns as may be designed.


[0136] b. The distribution network should have sufficient capacity to carry all the required streams to the end user DDVR.


[0137] c. The end user DDVR should have sufficient bandwidth, buffer and processing capability to handle the multi-streams. The DDVR should also have sufficient storage to buffer at least one Group II stream interval of data from the multi-streams.


[0138] These factors may affect the service provider in choosing which configuration to deploy.



Concept of Diskless DVR

[0139] Generally, the receiver DDVR may have a processor for raising a request for the content, and a connector for connecting to the Group I and II data streams.


[0140] For Configurations 1 and 2, it may be necessary for the DDVR to include a buffer for buffering the received Group I data streams. For Configurations 3 and 4, the DDVR should include a buffer for buffering the data received from the Group I data streams. The processor will then also be responsible for processing the data to put the data in a proper order.


[0141] With the multi-streaming concept, the receiving device, the receiver, at the user end may not need to have any hard disk storage. The only memory or buffer needed at the STB, the client/receiver, may be the RAM (random-access memory) to buffer one stream interval's equivalent of data. Assuming a stream interval of 8 minutes, this requires roughly 60 MB of RAM for a 1 Mb/s MPEG-4 stream. This technique can be contrasted with many VOD techniques that require a large hard disk storage (sometimes as large as 60 GB) at the STB. Therefore, this IVOD system also appears to the users like a diskless DVR. However, the system provider may choose to provide additional storage to the users in the form of a hard disk or other non-volatile medium, or use such other equipment as may be necessary to buffer and receive the data.
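
The buffer figure quoted above follows from simple arithmetic, sketched below for the stated example values (an 8-minute stream interval and a 1 Mb/s MPEG-4 stream).

```python
# RAM needed to buffer one stream interval of data at the DDVR/STB:
# buffer_bytes = stream_bitrate * stream_interval / 8.
# The 8-minute interval and 1 Mb/s MPEG-4 rate are the example values from the text.

stream_bitrate_bps = 1_000_000          # 1 Mb/s
stream_interval_s = 8 * 60              # 8 minutes

buffer_bytes = stream_bitrate_bps * stream_interval_s // 8
print(f"buffer required ~ {buffer_bytes / 1e6:.0f} MB")   # roughly 60 MB
```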


[0142] It should be further noted that there might be several options for the DDVR.


[0143] First, the DDVR may be configured such that it plays the received data at a slower rate than the transmission rate of the data. The transmission rate may be expressed as
$S/T$


[0144] under the condition that each data segment contains the same amount of data. In such cases, the DDVR may be required to have a larger buffer size to accommodate the un-received data.


[0145] Secondly, the DDVR may be configured to contain or pre-fetch at least a portion of the data in the Group I data streams, i.e. the leading portion of the data to be transmitted, for a certain period of time in its local buffer. Such data may be termed "pre-fetched data". If desired, the pre-fetched data may contain all of the data contained in the Group I data streams, provided that the DDVR has an adequate buffer size. In one extreme, the content of the data to be transmitted may be refreshed every day for video data, or more than once per day. In this particular example, it may be necessary for the pre-fetched data to be refreshed every day. The refresh time may be set at any desired value that may range from one day to even one year. It may be preferable to refresh the pre-fetched data during an off-peak period, like after midnight (for instance, from 01:00 to 06:00) or between 10:00 and 15:00, wherein the network activities resulting from clients' requests may be at a minimum. This process may be initiated by the anti-latency signal generator, the interactive signal generator, or by the client itself by a routine call procedure. In doing so, the latency time and the total number of data streams required in the network may be further reduced. This may be particularly important for VOD systems transmitting a large number of sets of data.



Trade-off of Space-Time-Bandwidth

[0146] There is a trade-off relationship for different configurations of the IVOD systems of this invention among buffer storage at DDVR (space), start-up latency (time) and streams (transmission bandwidth) required. This is shown in Table 3 and further illustrated in FIG. 13.


[0147] In FIG. 13, Vertex 1 may be realised as current VOD systems with all the data being sent and then stored in the STB, whether the client raises a request for the data or not. In such a case, the STB should have a relatively large buffer size. This may increase the manufacturing costs of the STB.


[0148] Vertex 2 may represent the systems as described in Configurations 1-5. Under such a configuration, the requirement on the STB may be minimal while the system may be more demanding on the bandwidth.


[0149] Vertex 3 may represent a hybrid system of Vertexes 1 and 2.


[0150] The decision on which “Vertex” to choose may be a matter of design choice depending on various factors including the bandwidth available, the specification of the STB, local requirements on latency and interactivity, and so on.
TABLE 3
Tradeoff among Buffer Storage (Space), Startup Latency (Time) and Streams (Transmission Bandwidth) Required

                                                              Number of Streams Required (Staggered Interval)
                                                              6 min   7 min   8 min   10 min   15 min
Content Size L = 1 hr
Dual-Streaming
  Configuration (1), T = 30 sec (coarse jump = 1 minute)        22      23      24      26       34
  Configuration (2), T = 30 sec (coarse jump = 2 minutes)       17      17      17      17       20
  Configuration (3), T = 6 sec (no coarse jump allowed)         20      19      18      17       16
  Configuration (5), T = 6 sec (coarse jump = 1 minute)         23      23      23      23       26
  Configuration (5), T = 6 sec (coarse jump = 2 minutes)        22      22      21      20       21
Multi-Streaming
  Configuration (4), optimal, T = 6 sec (no coarse jump)        15      14      13      12       10
  Optimal Configuration, T = 6 sec (coarse jump = 1 minute)     20      20      20      20       23
  Optimal Configuration, T = 6 sec (coarse jump = 2 minutes)    18      18      17      16       17
Content Size L = 2 hr
Dual-Streaming
  Configuration (1), T = 30 sec (coarse jump = 1 minute)        32      31      31      32       38
  Configuration (2), T = 30 sec (coarse jump = 2 minutes)       27      25      24      23       24
  Configuration (3), T = 6 sec (no coarse jump allowed)         30      27      26      23       20
  Configuration (5), T = 6 sec (coarse jump = 1 minute)         33      31      30      29       32
  Configuration (5), T = 6 sec (coarse jump = 2 minutes)        32      30      28      26       25
Multi-Streaming
  Configuration (4), optimal, T = 6 sec (no coarse jump)        25      22      20      18       14
  Optimal Configuration, T = 6 sec (coarse jump = 1 minute)     31      29      27      27       28
  Optimal Configuration, T = 6 sec (coarse jump = 2 minutes)    28      26      24      22       21



Application to Cable, Satellite and Terrestrial Broadcasting Systems

[0151] The IVOD systems of this invention may find immediate applications in existing cable TV, terrestrial broadcasting, and satellite broadcasting systems. With very little modification to the existing infrastructure, non-interactive broadcasting or NVOD systems may be converted into an IVOD system. Both analogue and digital transmission systems can take advantage of the multi-streaming concept. However, the discussion below will only describe system configurations for digital transmission systems.


[0152] In these digital broadcasting systems, the RF transmission bands are usually divided into 6 MHz (NTSC) or 8 MHz (PAL) channels. There can be over a hundred channels in a cable TV, terrestrial or satellite broadcasting system. FIG. 11 shows a typical system configuration for this multi-streaming system. It is very similar to existing broadcasting systems. Only the transmission unit at the head end, which may be called an anti-latency device, and the reception unit at the user end, the client/receiver, may need to be modified. At the head end, instead of sending analog signals in each channel, digital signals such as QAM are transmitted. Typically, one can put 30-40 Mb/s into an RF channel. Assuming a 2-hour content, one can first use MPEG-4 or other compression algorithms to convert the analog signal into a digital stream with a bit rate of roughly 1 Mb/s. Using the Fibonacci dual-streaming (Configuration 3) or the optimal harmonic multi-streaming IVOD concept (Configuration 4), one can place 30 to 40 IVOD streams into a single RF channel. The contents are put into different RF channels according to the PAL/NTSC/SECAM standard to maintain compatibility with the existing broadcasting system, and each RF channel can contain a few hours of contents.
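
A rough sketch of this channel-capacity arithmetic follows; the 38 Mb/s payload figure is an assumed value within the 30-40 Mb/s range quoted above.

```python
# Rough capacity arithmetic for one digital RF channel: the number of 1 Mb/s
# IVOD streams that fit is simply channel_rate / stream_rate. The 38 Mb/s
# channel payload is an assumed value within the 30-40 Mb/s range quoted above.

channel_rate_mbps = 38      # usable payload of one 6/8 MHz QAM channel (assumed)
stream_rate_mbps = 1        # one MPEG-4 compressed content stream

streams_per_channel = channel_rate_mbps // stream_rate_mbps
print(f"about {streams_per_channel} IVOD streams fit in one RF channel")
```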


[0153] At the user end, the set top box should be RF-tuned to the particular RF channel of interest. Then the cable modem would filter out the 30-40 Mb/s digital streams and decode two streams at a time (for Fibonacci dual-streaming systems) or decode all the harmonic multi-streams (for harmonic multi-streaming systems). FIG. 12 shows the block diagram of the STB/cable modem. The STB/cable modem is similar to other STB/cable modems except for its processing unit, which can process at least 2 multi-streams simultaneously rather than a single stream. The decoded streams would be buffered in the STB and the content would be reconstructed according to the sequence number of the segments. With the hundreds of channels available in a typical broadcasting system, this translates to over 200 hours or more of fully interactive programs available to an infinite number of users.


[0154] While the preferred embodiment of the present invention has been described in detail by the examples, it is apparent that modifications and adaptations of the present invention will occur to those skilled in the art. It is to be expressly understood, however, that such modifications and adaptations are within the scope of the present invention, as set forth in the following claims. Furthermore, the embodiments of the present invention shall not be interpreted to be restricted by the examples or figures only.


Claims
  • 1. A method for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including the steps of: generating at least one anti-latency data stream containing at least a leading portion of data for receipt by the client; and generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
  • 2. The method of claim 1, wherein: said data is fragmented into K segments each requiring a time T to transmit over the network; the anti-latency data streams include M anti-latency data streams; and the interactive data streams include N interactive data streams.
  • 3. The method of claim 1, wherein: the anti-latency data streams contain the leading portion of said data only; and the interactive data streams contain a whole set of said data.
  • 4. The method of claim 2, wherein: each of the M anti-latency data streams contains substantially identical data repeated continuously within said anti-latency data stream, and wherein each successive anti-latency data stream is staggered by an anti-latency time interval; and each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval.
  • 5. The method of claim 4, wherein: each of the M anti-latency data streams has J segments; and the anti-latency time interval is T.
  • 6. The method of claim 4, wherein the interactive time interval is JT.
  • 7. The method of claim 5, wherein MJ.
  • 8. The method of claim 7, wherein M=J.
  • 9. The method of claim 6, wherein
  • 10. The method of claim 9, wherein
  • 11. The method of claim 8 or 10, wherein
  • 12. The method of claim 4, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 13. The method of claim 4, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 14. The method of claim 4, further including the steps of: connecting the client to any one of the M anti-latency data streams when the client raises a request for said data; and connecting the client to any one of the N interactive data streams.
  • 15. The method of claim 2, wherein the anti-latency data streams include: I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and II. a plurality of finishing data streams, each of the finishing data streams: containing the rest of the leading portion of said data; and being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval; each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval.
  • 16. The method of claim 15, wherein each of the finishing data streams has J segments; and the anti-latency time interval is T.
  • 17. The method of claim 15, wherein the interactive time interval is JT.
  • 18. The method of claim 16, wherein
  • 19. The method of claim 18, wherein
  • 20. The method of claim 17, wherein
  • 21. The method of claim 20, wherein
  • 22. The method of claim 19 or 21, wherein J = √(2K).
  • 23. The method of claim 15, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 24. The method of claim 15, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 25. The method of claim 15, further including the steps of: connecting the client to the leading data stream when the client raises a request for said data; subsequently connecting the client to any one of the finishing data streams; and connecting the client to any one of the N interactive data streams.
  • 26. The method of claim 2, wherein: each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval, wherein the interactive time interval = KT/N; the anti-latency data streams 1 to M are generated such that an mth anti-latency data stream has Fm segments, wherein Fm is an mth Fibonacci number; and the Fm segments are repeated continuously within the mth anti-latency data stream.
  • 27. The method of claim 26, further including the steps of: connecting the client to at least the mth and (m+1)th anti-latency data streams when the client raises a request for said data; buffering the data in at least the mth and (m+1)th anti-latency data streams in the client; subsequently connecting the client to successive anti-latency data streams; and repeating the previous steps until all data in the leading portion is received by the client.
  • 28. The method of claim 27, further including the step of: connecting the client to any one of the N interactive data streams after all data in the leading portion is received by the client.
  • 29. The method of claim 26, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 30. The method of claim 26, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 31. The method of claim 26, wherein
  • 32. The method of claim 26, wherein m starts from 1
  • 33. The method of claim 26, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
  • 34. The method of claim 2, wherein: each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval, wherein the interactive time interval = KT/N; in the M anti-latency data streams, I. the leading portion of said data contains leading data segments labeled 1 to J; and II. the leading data segments are distributed in the M anti-latency data streams such that a jth leading segment is repeated by an anti-latency time interval jT within the anti-latency data streams.
  • 35. The method of claim 34, further including the steps of: connecting the client to all of the M anti-latency data streams when the client raises a request for said data; and buffering the leading portion of said data in the M anti-latency data streams in the client.
  • 36. The method of claim 35, further including the step of: connecting the client to any one of the N interactive data streams after all data in the leading portion is received by the client.
  • 37. The method of claim 34, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 38. The method of claim 34, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 39. The method of claim 34, wherein
  • 40. The method of claim 34 wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows:
  • 41. The method of claim 2, wherein the M anti-latency data streams contain the leading portion of said data; and further include two batches of data streams, being a 1st set of anti-latency data streams and a 2nd set of anti-latency data streams.
  • 42. The method of claim 41, wherein: the 1st anti-latency data streams have A 1st anti-latency data streams from 1 to A, wherein I. an ath anti-latency data stream has Fa segments, and Fa is an ath Fibonacci number; and II. the Fa segments are repeated continuously within the ath 1st anti-latency data stream; the 2nd anti-latency data streams have B 2nd anti-latency data streams, wherein each of the B 2nd anti-latency data streams contains substantially identical data repeated continuously within said 2nd anti-latency data stream, and wherein each successive 2nd anti-latency data stream is staggered by a coarse-jump frame period; such that the client can perform a coarse-jump function when the client is connected to the B 2nd anti-latency data streams.
  • 43. The method of claim 42, further including the steps of: connecting the client to at least the ath and (a+1)th 1st anti-latency data streams when the client raises a request for said data; buffering the data in at least the ath and (a+1)th 1st anti-latency data streams in the client; subsequently connecting the client to successive 1st anti-latency data streams; repeating the previous steps until all data in the A 1st anti-latency data streams is received by the client.
  • 44. The method of claim 43, further including the steps of: connecting the client to any one of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; and connecting the client to any one of the N interactive data streams after all data in the connected B 2nd anti-latency data stream is received by the client.
  • 45. The method of claim 42, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 46. The method of claim 42, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 47. The method of claim 42, wherein said coarse-jump frame period includes E data segments, and FA2E.
  • 48. The method of claim 42, wherein a starts from 1.
  • 49. The method of claim 42, wherein a starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
  • 50. The method of claim 41, wherein: the 1st anti-latency data streams have A 1st anti-latency data streams from 1 to A, wherein I. an ath anti-latency data stream has Fa segments, wherein Fa is an ath Fibonacci number; and II. the Fa segments are repeated continuously within the ath 1st anti-latency data stream; the 2nd anti-latency data streams have B 2nd anti-latency data streams including I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and II. a plurality of finishing data streams, each of the finishing data streams: containing the rest of the leading portion of said data; and being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse-jump frame period such that the client can perform a coarse-jump interactive function when the client is connected to the B 2nd anti-latency data streams.
  • 51. The method of claim 50, further including the steps of: connecting the client to at least the ath and (a+1)th 1st anti-latency data streams when the client raises a request for said data; buffering the data in at least the ath and (a+1)th 1st anti-latency data streams in the client; subsequently connecting the client to successive 1st anti-latency data streams; and repeating the previous steps until all data in the A 1st anti-latency data streams is received by the client.
  • 52. The method of claim 51, further including the steps of: connecting the client to the leading data stream after all data in the 1st anti-latency data streams is received by the client; subsequently connecting the client to any one of the finishing data streams; and connecting the client to any one of the N interactive data streams after all data in the B 2nd anti-latency data streams is received by the client.
  • 53. The method of claim 50, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 54. The method of claim 50, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 55. The method of claim 50, wherein said coarse-jump frame period includes E data segments, and FA2E.
  • 56. The method of claim 50, wherein a starts from 1.
  • 57. The method of claim 50, wherein a starts from 4 and the repeating 1st, 2nd, and 3rd data streams of the A 1st anti-latency data streams have the following configuration:
  • 58. The method of claim 41, wherein: the 1st anti-latency data streams have A 1st anti-latency data streams, wherein, I. the A 1st anti-latency data streams contain 1 to C 1st data segments; and II. the 1st data segments are distributed in the A 1st anti-latency data streams such that a cth leading segment is repeated by an anti-latency time interval cT within the A 1st anti-latency data streams; the 2nd anti-latency data streams have B 2nd anti-latency data streams, wherein each of the B 2nd anti-latency data streams contains substantially identical data repeated continuously within said 2nd anti-latency data stream, and wherein each successive 2nd anti-latency data stream is staggered by a coarse-jump frame period; such that the client can perform a coarse-jump interactive function when the client is connected to the B 2nd anti-latency data streams.
  • 59. The method of claim 58, further including the steps of: connecting the client to all of the A 1st anti-latency data streams when the client raises a request for said data; and buffering data in the A 1st anti-latency data streams in the client until all data in the A 1st anti-latency data streams is received by the client.
  • 60. The method of claim 59, further including the steps of: connecting the client to any one of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; and connecting the client to any one of the N interactive data streams after all data in the connected B 2nd anti-latency data stream is received by the client.
  • 61. The method of claim 58, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 62. The method of claim 58, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 63. The method of claim 58, wherein said coarse-jump frame period includes E data segments, and
  • 64. The method of claim 58, wherein six of the A 1st anti-latency data streams are arranged as follows:
  • 65. The method of claim 41, wherein: the 1st anti-latency data streams have A 1st anti-latency data streams, wherein, I. the A 1st anti-latency data streams contain 1 to C 1st data segments; and II. the 1st data segments are distributed in the A 1st anti-latency data streams such that a cth leading segment is repeated by an anti-latency time interval cT within the A 1st anti-latency data streams; the 2nd anti-latency data streams have B 2nd anti-latency data streams including I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and II. a plurality of finishing data streams, each of the finishing data streams: containing the rest of the leading portion of said data; and being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse-jump frame period such that the client can perform a coarse-jump interactive function when the client is connected to the B 2nd anti-latency data streams.
  • 66. The method of claim 65, further including the steps of: connecting the client to all of the A 1st anti-latency data streams when the client raises a request for said data; and buffering data in the A 1st anti-latency data streams in the client until all data in the A 1st anti-latency data streams is received by the client.
  • 67. The method of claim 66, further including the steps of: connecting the client to the leading data stream of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; subsequently connecting the client to any one of the finishing data streams; and connecting the client to any one of the N interactive data streams after all data in the connected B 2nd anti-latency data stream is received by the client.
  • 68. The method of claim 65, wherein each of the N interactive data streams contains the whole set of said data having K segments.
  • 69. The method of claim 65, wherein each of the N interactive data streams contains the remaining portion of said data only.
  • 70. The method of claim 65, wherein said coarse-jump frame period includes E data segments, and
  • 71. The method of claim 67, wherein six of the A 1st anti-latency data streams are arranged as follows:
  • 72. The method of any one of claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or 65, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
  • 73. The method of any one of claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or 65, further including the step of pre-fetching at least a portion of data in the leading portion in the client.
  • 74. A method for transmitting data over a network to at least one client including the step of fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
  • 75. A method for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including the steps of: generating at least one anti-latency data stream containing at least a leading portion of data for receipt by the client; pre-fetching the leading portion in the client as pre-fetched data; and generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.
  • 76. The method of claim 75, further including the step of refreshing the pre-fetched data during a refresh time period.
  • 77. The method of claim 76, wherein the refresh time period is an off-peak period.
  • 78. The method of claim 76, wherein pre-fetched data is refreshed once per day.
  • 79. A method for transmitting data over a network to at least one client including the steps of generating a plurality of anti-latency data streams, the anti-latency data streams include: a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and a plurality of finishing data streams, each of the finishing data streams: containing at least the rest of the leading portion of said data; and repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval.
  • 80. The method of claim 79 further including the steps of: connecting the client to the leading data stream when the client raises a request for said data; and subsequently connecting the client to any one of the finishing data streams.
  • 81. The method of claim 79, wherein said data is fragmented into K segments each requiring a time T to transmit over the network and the anti-latency time interval is T.
  • 82. A method for transmitting data over a network to at least one client including the steps of: generating M anti-latency data streams from 1 to M, wherein an mth anti-latency data stream has Fm segments, and Fm is an mth Fibonacci number; and wherein said Fm segments are repeated continuously within the mth anti-latency data stream.
  • 83. The method of claim 82 further including the steps of: connecting the client to at least the mth and (m+1)th anti-latency data streams when the client raises a request for said data; buffering the data in at least the mth and (m+1)th anti-latency data streams in the client; subsequently connecting the client to successive anti-latency data streams; and repeating the previous steps until all data in the leading portion is received by the client.
  • 84. The method of claim 82, wherein m starts from 1.
  • 85. The method of claim 82, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
  • 86. A method for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network, including the steps of: generating M anti-latency data streams containing 1 to K anti-latency data segments, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that a kth leading segment is repeated by an anti-latency time interval kT within the anti-latency data streams.
  • 87. The method of claim 86 further including the steps of: connecting the client to all of the M anti-latency data streams; and buffering said data in the M anti-latency data streams in the client when the client raises a request for said data.
  • 88. The method of claim 86, wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows:
  • 89. A method for receiving data being transmitted over a network to at least one client according to claim 2, including the steps of: raising a request for said data; and connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams.
  • 90. The method of claim 89 further including the steps of connecting the client to the N interactive data streams after all data in the M anti-latency data streams is received by the client.
  • 91. The method of claim 89, wherein data in the leading portion is received sequentially.
  • 92. The method of claim 89, wherein the client connects to at least two of the anti-latency data streams simultaneously.
  • 93. The method of claim 92, further including the step of: buffering data received sequentially by the client in the two anti-latency data streams connected to the client.
  • 94. The method of claim 89, wherein the client connects to all of the anti-latency data streams simultaneously.
  • 95. The method of claim 94 further including the steps of: buffering data in the anti-latency data streams connected in the client; and rearranging the buffered data according to a proper sequence.
  • 96. The method of claim 89 further including the step of pre-fetching at least a portion of data in the M anti-latency data streams in the client as pre-fetched data.
  • 97. The method of claim 96 further including the step of refreshing the pre-fetched data during a refresh time period.
  • 98. The method of claim 97, wherein the refresh time period is 01:00-06:00.
  • 99. The method of claim 97, wherein the refresh time period is 10:00-15:00.
  • 100. A method for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream including the steps of: pre-fetching the leading portion in the client as pre-fetched data; and merging the pre-fetched data to the remaining portion.
  • 101. The method of claim 100 further including the step of refreshing the pre-fetched data during a refresh time period.
  • 102. The method of claim 101, wherein the refresh time period is an off-peak period.
  • 103. The method of claim 101, wherein pre-fetched data is refreshed once per day.