An embodiment of the invention generally relates to sending video content from multiple queues at a server to client devices.
Years ago, computers were isolated devices that did not communicate with each other. But, computers are increasingly being connected together in networks. One use of this connectivity is for the real-time and near real-time audio and video transmission over networks, such as networks that use the Internet Protocol (IP) and provide video-on-demand. One of the challenges facing IPTV (Internet Protocol Television) implementations and video-on-demand is the difficulty of scheduling computational and network bandwidth and avoiding video “stuttering” that occurs as a delivery network approaches saturation. Traditional methods of broadcast delivery in an IP network use a technique known as “pull technology” where the client (e.g., a set-top box, personal computer, web browser, or television set) requests the video content asynchronously, which results in latency that varies geometrically with utilization. Latency is generally represented as L=1/(1−M), where L is the latency factor and M is the utilization as a percentage of available bandwidth. As a result, e.g., data packets traveling in a network that is using 50% of its bandwidth take nearly twice as long to arrive as those in a 1% utilized network. Occasional latency and jitter may be acceptable in traditional IP applications (e.g., web browsing or file transfer), but a more reliable delivery method is required for transmitting real-time data such as on-demand video.
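As a purely illustrative aid (not part of any embodiment), the latency relationship above can be evaluated directly; the short Python sketch below computes the latency factor L at a few utilization levels and reproduces the 50% versus 1% comparison.

# Illustrative only: latency factor L = 1 / (1 - M), where M is utilization as a
# fraction of available bandwidth (0 <= M < 1).
def latency_factor(utilization):
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in the range [0, 1)")
    return 1.0 / (1.0 - utilization)

for m in (0.01, 0.50, 0.90):
    print(f"utilization {m:.0%}: latency factor {latency_factor(m):.2f}")
# Prints approximately 1.01 at 1%, 2.00 at 50%, and 10.00 at 90% utilization, so
# packets at 50% utilization take nearly twice as long to arrive as at 1% utilization.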
Another challenge facing video-on-demand and IPTV (Internet Protocol Television) implementations is that introducing a video content load into a network produces high spikes of network utilization. When networks are driven into periodic high (e.g., 90-100 percent) utilization, the chance of network congestion, errors, packet loss, and overload increases significantly.
Currently, traffic shaping is the primary means to alleviate the effects of high network use and enable greater utilization of network resources. But, current traffic shaping algorithms (e.g., leaky bucket or token bucket) do their work after data has already entered the network. As a result, current traffic shaping algorithms may drop data (if too much data is entering a network link), requiring retransmission, and they may introduce latency (via queuing delays). These effects introduce stutter into the stream received by the client device of the customer. To eliminate stuttering, client devices often buffer the data stream until enough data has been received to reliably cover up any subsequent interruptions in the stream. But, buffering introduces a noticeable delay when changing between streams, which may be acceptable when browsing the internet for video clips, but the typical television viewer expects to be able to flip through many channels with little or no delay. To compete with cable television delivery, Internet television implementations must provide clear, uninterrupted transmission and must permit very fast channel changing, which is not provided by current technology.
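For reference, the leaky bucket and token bucket algorithms mentioned above operate per packet after traffic has reached a network node; the following generic token-bucket sketch in Python (an illustration of the conventional approach, not of the embodiments described herein, with assumed names) shows why such a shaper can only delay or discard data that is already in flight.

import time

class TokenBucket:
    """Generic token-bucket shaper: tokens accrue at 'rate' per second, up to 'capacity'."""
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens (e.g., bytes) replenished per second
        self.capacity = capacity      # maximum burst size in tokens
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size):
        """Return True if the packet may be forwarded now; otherwise the caller must
        queue it (adding latency) or drop it (forcing retransmission)."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False

Because the decision is made only when a packet arrives at the shaper, congestion is managed after the data has entered the network, which is the source of the dropped packets, queuing delay, and stutter described above.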
Thus, what is needed is an enhanced technique for the delivery of audio/video data in a network.
A method, apparatus, system, and storage medium are provided. In an embodiment, a content server has multiple queues, each of which includes records. Each record in a queue represents a frame in a logical group. Each of the queues transitions between a control state, an ingestion state, and a distribution state. During the control states, records are added to the queues. During the ingestion states, the frames are copied into memory at the content server. During the distribution states, the content server sends the logical groups to a client. Each of the control state, the ingestion state, and the distribution state has a time duration equal to the amount of time needed to play the logical group.
Various embodiments of the present invention are hereinafter described in conjunction with the appended drawings:
It is to be noted, however, that the appended drawings illustrate only example embodiments of the invention, and are therefore not considered limiting of its scope, for the invention may admit to other equally effective embodiments.
In an embodiment, a content server has multiple queues, each of which includes records. Each record in a queue represents a frame in a logical group. Each of the queues transitions between a control state, an ingestion state, and a distribution state. During the control state, records are added to the queue, and commands received from target clients are processed. During the ingestion state, the content of the frames is copied (e.g., from local storage, remote storage, or from a computer system attached to a network) into memory at the content server. During the distribution state, the content server sends the respective logical groups to their respective target clients via the network. Each of the control state, the ingestion state, and the distribution state has a time period, or duration, equal to the amount of time needed to play the logical group at the client. In this way, an embodiment of the invention transmits the frame content in units of logical groups, which eliminates the need for complex session handling between the target clients and the content server. Further, the transmission of logical groups of frames within the time periods provides predictable network bandwidth consumption, such that an embodiment of the invention allows the network load to be driven higher than conventional asynchronous network traffic allows.
Referring to the Drawings, wherein like numbers denote like parts throughout the several views,
The major components of the content server computer system 100 include one or more processors 101, a main memory 102, a terminal interface 111, a storage interface 112, an I/O (Input/Output) device interface 113, and communications/network interfaces 114, all of which are coupled for inter-component communication via a memory bus 103, an I/O bus 104, and an I/O bus interface unit 105.
The computer system 100 contains one or more general-purpose programmable central processing units (CPUs) 101A, 101B, 101C, and 101D, herein generically referred to as the processor 101. In an embodiment, the computer system 100 contains multiple processors typical of a relatively large system; however, in another embodiment the computer system 100 may alternatively be a single CPU system. Each processor 101 executes instructions stored in the main memory 102 and may include one or more levels of on-board cache.
The main memory 102 is a random-access semiconductor memory for storing or encoding data and programs. In another embodiment, the main memory 102 represents the entire virtual memory of the computer system 100, and may also include the virtual memory of other computer systems coupled to the computer system 100 or connected via the network 130. The main memory 102 is conceptually a single monolithic entity, but in other embodiments the main memory 102 is a more complex arrangement, such as a hierarchy of caches and other memory devices. For example, memory may exist in multiple levels of caches, and these caches may be further divided by function, so that one cache holds instructions while another holds non-instruction data, which is used by the processor or processors. Memory may be further distributed and associated with different CPUs or sets of CPUs, as is known in any of various so-called non-uniform memory access (NUMA) computer architectures.
The main memory 102 stores or encodes programs 150, queues 152, a client state controller 154, and a distribution controller 156. Although the programs 150, the queues 152, the client state controller 154, and the distribution controller 156 are illustrated as being contained within the memory 102 in the computer system 100, in other embodiments some or all of them may be on different computer systems and may be accessed remotely, e.g., via the network 130. The computer system 100 may use virtual addressing mechanisms that allow the computer programs of the computer system 100 to behave as if they only have access to a large, single storage entity instead of access to multiple, smaller storage entities. Thus, while the programs 150, the queues 152, the client state controller 154, and the distribution controller 156 are illustrated as being contained within the main memory 102, these elements are not necessarily all completely contained in the same storage device at the same time. Further, although the programs 150, the queues 152, the client state controller 154, and the distribution controller 156 are illustrated as being separate entities, in other embodiments some of them, portions of some of them, or all of them may be packaged together.
The programs 150 may include frames of video, audio, images, data, control data, formatting data, or any multiple or combination thereof, capable of being played or displayed via the user I/O devices 121. The programs 150 are further described below with reference to
In an embodiment, the client state controller 154 and the distribution controller 156 include instructions capable of executing on the processor 101 or statements capable of being interpreted by instructions that execute on the processor 101, to carry out the functions as further described below with reference to
The memory bus 103 provides a data communication path for transferring data among the processor 101, the main memory 102, and the I/O bus interface unit 105. The I/O bus interface unit 105 is further coupled to the system I/O bus 104 for transferring data to and from the various I/O units. The I/O bus interface unit 105 communicates with multiple I/O interface units 111, 112, 113, and 114, which are also known as I/O processors (IOPs) or I/O adapters (IOAs), through the system I/O bus 104. The system I/O bus 104 may be, e.g., an industry standard PCI (Peripheral Component Interconnect) bus, or any other appropriate bus technology.
The I/O interface units support communication with a variety of storage and I/O devices. For example, the terminal interface unit 111 supports the attachment of one or more user I/O devices 121, which may include user output devices (such as a video display device, speaker, and/or television set) and user input devices (such as a keyboard, mouse, keypad, touchpad, trackball, buttons, light pen, or other pointing device).
The storage interface unit 112 supports the attachment of one or more direct access storage devices (DASD) 125, 126, and 127 (which are typically rotating magnetic disk drive storage devices, although they could alternatively be other devices, including arrays of disk drives configured to appear as a single large storage device to a host). The contents of the main memory 102 may be stored to and retrieved from the direct access storage devices 125, 126, and 127, as needed.
The I/O device interface 113 provides an interface to any of various other input/output devices or devices of other types, such as printers or fax machines. The network interface 114 provides one or more communications paths from the computer system 100 to other digital devices (e.g., the remote disk drive 134) and the client computer systems 135 and 136; such paths may include, e.g., one or more networks 130.
Although the memory bus 103 is shown in
In various embodiments, the computer system 100 may be a multi-user “mainframe” computer system, a single-user system, or a server or similar device that has little or no direct user interface, but receives requests from other computer systems (clients). In other embodiments, the computer system 100 may be implemented as a personal computer, portable computer, laptop or notebook computer, PDA (Personal Digital Assistant), tablet computer, pocket computer, telephone, pager, automobile, teleconferencing system, appliance, or any other appropriate type of electronic device.
The network 130 may be any suitable network or combination of networks and may support any appropriate protocol suitable for communication of data and/or code to/from the computer system 100, the remote disk drive 134, and the client computer systems 135 and 136. In various embodiments, the network 130 may represent a storage device or a combination of storage devices, either connected directly or indirectly to the computer system 100. In an embodiment, the network 130 may support the Infiniband architecture. In another embodiment, the network 130 may support wireless communications. In another embodiment, the network 130 may support hard-wired communications, such as a telephone line or cable. In another embodiment, the network 130 may support the Ethernet IEEE (Institute of Electrical and Electronics Engineers) 802.3x specification. In another embodiment, the network 130 may be the Internet and may support IP (Internet Protocol).
In another embodiment, the network 130 may be a local area network (LAN) or a wide area network (WAN). In another embodiment, the network 130 may be a hotspot service provider network. In another embodiment, the network 130 may be an intranet. In another embodiment, the network 130 may be a GPRS (General Packet Radio Service) network. In another embodiment, the network 130 may be a FRS (Family Radio Service) network. In another embodiment, the network 130 may be any appropriate cellular data network or cell-based radio network technology. In another embodiment, the network 130 may be an IEEE 802.11B wireless network. In still another embodiment, the network 130 may be any suitable network or combination of networks. Although one network 130 is shown, in other embodiments any number of networks (of the same or different types) may be present.
The client computer systems 135 and 136 may be implemented as set-top boxes, digital video recorders (DVRs), or television sets and may include some or all of the hardware components previously described above as being included in the content server computer system 100. The client computer systems 135 and 136 are connected to the user I/O devices 121, on which the content of the programs 150 may be displayed, presented, or played.
It should be understood that
The various software components illustrated in
Moreover, while embodiments of the invention have been and hereinafter will be described in the context of fully-functioning computer systems, the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and the invention applies equally regardless of the particular type of signal-bearing medium used to actually carry out the distribution. The computer programs defining the functions of this embodiment may be delivered to the content server computer system 100 and/or the client computer systems 135 and 136 via a variety of tangible signal-bearing media that may be operatively or communicatively connected (directly or indirectly) to the processor or processors, such as the processor 101. The signal-bearing media may include, but are not limited to:
(1) information permanently stored on a non-rewriteable storage medium, e.g., a read-only memory device attached to or within a computer system, such as a CD-ROM readable by a CD-ROM drive;
(2) alterable information stored on a rewriteable storage medium, e.g., a hard disk drive (e.g., DASD 125, 126, or 127), the main memory 102, CD-RW, or diskette; or
(3) information conveyed to the server computer system 100 by a communications medium, such as through a computer or a telephone network, e.g., the network 130.
Such tangible signal-bearing media, when encoded with or carrying computer-readable and executable instructions that direct the functions of the present invention, represent embodiments of the present invention.
Embodiments of the present invention may also be delivered as part of a service engagement with a client corporation, nonprofit organization, government entity, internal organizational structure, or the like. Aspects of these embodiments may include configuring a computer system to perform, and deploying computing services (e.g., computer-readable code, hardware, and web services) that implement, some or all of the methods described herein. Aspects of these embodiments may also include analyzing the client company, creating recommendations responsive to the analysis, generating computer-readable code to implement portions of the recommendations, integrating the computer-readable code into existing processes, computer systems, and computing infrastructure, metering use of the methods and systems described herein, allocating expenses to users, and billing users for their use of these methods and systems.
In addition, various programs described hereinafter may be identified based upon the application for which they are implemented in a specific embodiment of the invention. But, any particular program nomenclature that follows is used merely for convenience, and thus embodiments of the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
The exemplary environments illustrated in
The respective content servers 100-1 and 100-2 include respective distribution controllers 156-1 and 156-2 and respective programs 150-1 and 150-2. The distribution controllers 156-1 and 156-2 are examples of the distribution controller 156 (
A frame represents material or data that may be presented via the user I/O device 121, at any one time. For example, if the frames include video, then a frame is a still image, and displaying the still images of the frames in succession over time (displayed in a number of frames per second), in frame number order (play order of the frames), creates the illusion to the viewer of motion or a moving picture. Frames per second (FPS) is a measure of how much information is used to store and display motion video. Frames per second applies equally to film video and digital video. The more frames per second, the smoother the motion appears. Television in the United States, for example, is based on the NTSC (National Television System Committee) format, which displays 30 interlaced frames per second while movies or films commonly display 24 frames per second.
But, in other embodiments, any number of frames per second and any appropriate format or standard for storing and presenting the programs 150-1 may be used. Embodiments of the invention may include video only, video and audio, audio only, or still images. Examples of various standards and formats in which the frames may be stored include: PAL (Phase Alternate Line), SECAM (Sequential Color and Memory), RS170, RS330, HDTV (High Definition Television), MPEG (Motion Picture Experts Group), DVI (Digital Video Interface), SDI (Serial Digital Interface), MP3, QuickTime, RealAudio, and PCM (Pulse Code Modulation).
In other embodiments, the frames represent network frames, which are blocks of data that are transmitted together across the network 130, and multiple network frames may be necessary to compose one movie or television frame. The content of the frames may include movies, television programs, educational programs, instructional programs, training programs, audio, video, advertisements, public service announcements, games, text, images, or any portion, combination, or multiple thereof. In addition to the displayable or presentable data, the frames may also include other information, such as control information, formatting information, timing information, frame numbers, sequence numbers, and identifiers of the programs and/or target clients.
The frame numbers represent the sequence or play order that the frames are to be presented or displayed via user I/O device 121, but the frames may be transmitted across the network 130 in a different order (a transmission order) and re-ordered to the displayable or playable order by the target client device 135 or 136.
The frames are organized into logical groups 310-0, 310-1, 310-2, 310-3, and 310-4. The logical group 310-0 includes frames 305-0, 305-1, 305-2, and 305-3. The logical group 310-1 includes frames 305-4, 305-5, 305-6, and 305-7. The logical group 310-2 includes frames 305-8, 305-9, 305-10, and 305-11. The logical group 310-3 includes frames 305-12, 305-13, 305-14, and 305-15. Logical groups are the units of the programs 150 that the content server 100 transmits to any one target client at any one time (during the time period or amount of time between time reference points, as further described below with reference to
In an embodiment, the number of frames in a logical group is the display frame rate (the number of frames per second displayed at the I/O device 121) multiplied by the round trip latency of the logical group when transferred between the content server 100 and the target client. The round trip latency is the amount of time needed for the distribution controller 156 to send a logical group of frames to the target client and receive an optional acknowledgment of receipt of the logical group from the target client.
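Under this relationship, the logical group size can be computed as in the following minimal Python sketch (the function name and parameters are illustrative assumptions):

import math

def frames_per_logical_group(frames_per_second, round_trip_latency_seconds):
    """Number of frames in a logical group: display frame rate multiplied by the
    round trip latency between the content server and the target client."""
    return math.ceil(frames_per_second * round_trip_latency_seconds)

# Example: a 30 frames-per-second display rate and a 2-second round trip latency
# yield a logical group of 60 frames.
print(frames_per_logical_group(30, 2.0))  # 60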
The content identifier 426 identifies the content that is represented by the respective record. The frame identifier 428 identifies the frame of the content. The pointer 430 includes the address of the location of the content identified by the content identifier 426. In various embodiments, the pointer 430 may point to an address in the memory 102 (as illustrated in the records 418, 420, 422, and 424), an address within secondary storage that is local to the content server 100 (as illustrated in the records 410, 412, 414, and 416), such as in the storage devices 125, 126, or 127, or may point to an address within secondary storage that is remote to the content server 100 (as illustrated in the records 402, 404, 406, and 408), e.g., the remote secondary storage 134 connected to the content server computer system 100 via the network 130. The client identifier 432 identifies the target client device (e.g., the client devices 135 or 136) that is to receive the content identified by the respective record.
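For illustration only, a queue record with the fields just described might be represented as in the following Python sketch (the field names are assumptions chosen to mirror the content identifier 426, frame identifier 428, pointer 430, and client identifier 432; the actual record layout may differ):

from dataclasses import dataclass
from typing import Tuple

@dataclass
class QueueRecord:
    content_id: str              # content identifier 426: which program the frame belongs to
    frame_id: int                # frame identifier 428: which frame of that content
    pointer: Tuple[str, int]     # pointer 430: (location kind, address), e.g., memory or storage
    client_id: str               # client identifier 432: the target client device

# Example record for the first frame of a program destined for target client A:
record = QueueRecord(content_id="program 150-1", frame_id=0,
                     pointer=("local storage", 0), client_id="client A 135")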
The queue 152-1A further includes a state field 434, which includes example state contents 436. The example state contents 436 identifies the control state, which is the state of the queue during which the distribution controller 156 processes control commands, such as play, fast forward, rewind, pause, or skip commands, and during which the distribution controller 156 adds records to the queue 152-1A that represent the content requested by the commands. The processing that the distribution controller 156 performs while the queue 152-1A is in the control state 436 is further described below with reference to
As illustrated in
The queue 152-1B further includes a state field 434, which includes example state contents 536. The example state contents 536 identifies the ingestion state, which is the state of the queue during which the distribution controller 156 copies the content identified by the content identifier 426 into the memory 102. The processing that the distribution controller 156 performs while the queue 152-1B is in the ingestion state 536 is further described below with reference to
As illustrated in
The queue 152-1C further includes a state field 434, which includes example state contents 636. The example state contents 636 identifies the distribution state, which is the state of the queue during which the distribution controller 156 sends the logical groups from the memory 102 to the client device 135 and/or 136 identified by the client identifier 432. The processing that the distribution controller 156 performs while the queue 152-1C is in the distribution state 636 is further described below with reference to
As illustrated in
While the queue 152 is in the control state 436, the distribution controller 156 processes the commands 735 received from the client. In various embodiments, the commands may be received directly or indirectly (routed from an unillustrated control server or other computer system) from the client. The distribution controller 156 then performs the state transition 720, so that the queue 152 transitions to the ingestion state 536. While the queue 152 is in the ingestion state 536, the distribution controller 156 copies program content identified by the queue records into the memory 102 (if not already present in the memory 102). The distribution controller 156 then performs the state transition 725, so that the queue 152 transitions to the distribution state 636. While the queue 152 is in the distribution state 636, the distribution controller 156 sends logical groups of program content frames (identified by the queue records) to the respective target clients. The distribution controller 156 then performs the state transition 730, so that the queue 152 transitions to the control state 436. The states and state transitions then repeat, so long as the distribution controller 156 continues to receive commands that request the transfer of program content and/or so long as more logical groups of frames remain to be transferred to clients.
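The repeating cycle just described can be summarized by the following small Python sketch (illustrative only; the handler functions and stop condition are assumed placeholders for the control, ingestion, and distribution processing described below):

STATE_CYCLE = {"control": "ingestion", "ingestion": "distribution", "distribution": "control"}

def run_queue(queue, handlers, more_work):
    """Cycle one queue through control -> ingestion -> distribution while work remains."""
    state = "control"
    while more_work(queue):
        handlers[state](queue)        # process the queue in its current state
        state = STATE_CYCLE[state]    # state transitions 720, 725, and 730

# Usage sketch with no-op handlers and a fixed two-cycle stop condition:
visited = []
handlers = {s: (lambda q, s=s: visited.append(s)) for s in STATE_CYCLE}
run_queue(object(), handlers, lambda q: len(visited) < 6)
print(visited)  # ['control', 'ingestion', 'distribution', 'control', 'ingestion', 'distribution']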
The processing that the distribution controller 156 performs while the queue 152 is in the control state 436, which causes the queue 152 to perform the state transition 720 from the control state 436 to the ingestion state 536, is further described below with reference to
At any one time, each of the queues within any group of three queues is in a different state. Using the example of
The time period between consecutive time reference points represents the amount of elapsed time that each of the queues 152 stays in a given state (e.g., the amount of time between the state transitions 720, 725, and 730). Logical groups are the units of the programs 150 that the content server 100 transmits to any one client during any one time period (during the time period or amount of time between time reference points). Logical groups are also the units of the programs that the content server operates on during one particular state of the queue 152. A logical group is also the unit of the content that a client device may play during one time period between two consecutive time reference points.
As illustrated in
Between the time reference points 810-4 and 810-5, the queue A 152-1 is in the control state 436, and the distribution controller 156 processes the logical group 310-3 for the target client A 135. Between the time reference points 810-5 and 810-6, the queue A 152-1 is in the ingestion state 536, and the distribution controller 156 processes the logical group 310-3 for the target client A 135. Between the time reference points 810-6 and 810-7, the queue A 152-1 is in the distribution state 636, and the distribution controller 156 processes the logical group 310-3 for the target client A 135.
Between the time reference points 810-2 and 810-3, the queue B 152-2 is in the control state 436, and the distribution controller 156 processes the logical group 310-1 for the target client A 135. Between the time reference points 810-3 and 810-4, the queue B 152-2 is in the ingestion state 536, and the distribution controller 156 processes the logical group 310-1 for the target client A 135. Between the time reference points 810-4 and 810-5, the queue B 152-2 is in the distribution state 636, and the distribution controller 156 processes logical group 310-1 for the target client A 135.
Between the time reference points 810-5 and 810-6, the queue B 152-2 is in the control state 436, and the distribution controller 156 processes the logical group 310-4 for the target client A 135. Between the time reference points 810-6 and 810-7, the queue B 152-2 is in the ingestion state 536, and the distribution controller 156 processes the logical group 310-4 for the target client A 135. Starting at the time of the time reference point 810-7, the queue B 152-2 is in the distribution state 636 and processes the logical group 310-4 for the target client A 135.
Between the time reference points 810-3 and 810-4, the queue C 152-3 is in the control state 436 and processes the logical group 310-2 for the target client A 135. Between the time reference points 810-4 and 810-5, the queue C 152-3 is in the ingestion state 536 and processes the logical group 310-2 for the target client A 135. Between the time reference points 810-5 and 810-6, the queue C 152-3 is in the distribution state 636 and processes the logical group 310-2 for the target client A 135.
Thus, the target client A 135 receives the logical group 310-0 between the time reference points 810-3 and 810-4 (during the distribution state 636 of the queue A 152-1) and plays the logical group 310-0 between the time reference points 810-4 and 810-5, which is the next time period following the time period of the distribution state 636 of the queue (or later if the target client A 135 is buffering its received content). Likewise, the target client A 135 receives the logical group 310-1 between the time reference points 810-4 and 810-5 (during the distribution state 636 of the queue B 152-2) and plays the logical group 310-1 between the time reference points 810-5 and 810-6, which is the next time period following the time period of the distribution state 636 of the queue (or later). Likewise, the target client A 135 receives the logical group 310-2 between the time reference points 810-5 and 810-6 (during the distribution state 636 of the queue C 152-3) and plays the logical group 310-2 between the time reference points 810-6 and 810-7, which is the next time period following the time period of the distribution state 636 of the queue (or later). Likewise, the target client A 135 receives the logical group 310-3 between the time reference points 810-6 and 810-7 (during the distribution state 636 of the queue A 152-1) and plays the logical group 310-3 starting at the time of the time reference point 810-7, which is the start of the next time period following the time period of the distribution state 636 of the queue (or later). Thus, the client receives its program content from multiple queues during different time periods.
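The staggered schedule described above can be reproduced by the following Python sketch (illustrative; the queue names and the one-period offset between queues are taken from the example above), which prints the state of each of three queues during each time period:

STATES = ["control", "ingestion", "distribution"]

def queue_schedule(num_queues=3, periods=7):
    """Each queue begins its control state one time period after the previous queue."""
    table = {}
    for q in range(num_queues):
        name = "queue " + chr(ord("A") + q)
        row = []
        for t in range(periods):
            phase = t - q                     # queue q starts its cycle at period q
            row.append(STATES[phase % 3] if phase >= 0 else "idle")
        table[name] = row
    return table

for name, states in queue_schedule().items():
    print(name, states)
# Once all three queues are active, each is in a different state during any one
# period, so exactly one queue is distributing a logical group to the client.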
As illustrated in
Control then continues to block 915 where the client state controller 154 sends the state of the client to the distribution controller 156, including an identification of the program content requested by the client, if the command includes such an identification. The distribution controller 156 processes commands from the client when the queue 152 associated with the client reaches the control state 436, as further described below.
Control then continues to block 920 where the distribution controller 156 receives the state of the client, determines the logical groups of frames that satisfy the command, determines the queues that are to process the logical groups, and sends the command with the identifiers of the logical groups to those queues. In an embodiment, the distribution controller 156 may send commands for the logical groups to a single queue 152. In another embodiment, the distribution controller 156 may spread the commands for a single target client for multiple logical groups of frames across multiple queues, as previously described above with reference to
Control then continues to block 999 where the logic of
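The spreading of a single client's logical groups across multiple queues, mentioned at block 920, can be illustrated by the following sketch (a simple round-robin assignment is assumed here for illustration; other assignment policies are possible):

def assign_logical_groups(logical_group_ids, queue_ids):
    """Spread consecutive logical groups for one target client across the available queues."""
    assignments = {q: [] for q in queue_ids}
    for i, group_id in enumerate(logical_group_ids):
        assignments[queue_ids[i % len(queue_ids)]].append(group_id)
    return assignments

print(assign_logical_groups(["310-0", "310-1", "310-2", "310-3", "310-4"],
                            ["queue A", "queue B", "queue C"]))
# {'queue A': ['310-0', '310-3'], 'queue B': ['310-1', '310-4'], 'queue C': ['310-2']}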
Control then continues to block 1005 where the distribution controller 156 locks the queue 152, which gives this instance of the distribution controller (which is processing the control state 436 of this queue 152) exclusive access to this queue 152 and prevents other instances or threads of the distribution controller 156 from accessing this queue 152. Control then continues to block 1010 where the distribution controller 156 removes all queue records from this queue 152 that represent frames in a logical group whose transfer to its target client 432 was completed (during a previous distribution state 636) and removes all queue records from this queue 152 for those clients 432 that are not in a play state.
Control then continues to block 1015 where, for every client that is in a play state, the distribution controller 156 adds records to this queue 152 for every frame in the respective next logical group that is to be sent to those clients. Control then continues to block 1020 where, for every client that is in a play state and that sent a command since the previous control state 436, the distribution controller 156 sets the content identifier 426, the frame identifier 428, and pointer 430 in the queue records to identify frames of the first logical group (in a play order) in the program specified by the command.
Control then continues to block 1025 where, for every queue record for which a previous logical group (previous in the play order of the program) was sent during the previous distribution state 636 of this queue 152 and for which a command has not been received since the previous control state 436, the distribution controller 156 sets the content identifier 426, the frame identifier 428, and the pointer 430 to identify the frames in the next logical group (the next logical group that is subsequent in the play order to the logical group that was sent to the client during the previous distribution state) of the program requested by the client 432 in the queue record.
Control then continues to block 1030 where the distribution controller 156 waits for the current logical group time period to expire. Control then continues to block 1035 where the distribution controller 156 changes the state 434 in the queue 152 to indicate the ingestion state 536 and releases the lock on the queue 152, which performs the state transition 720 (
Control then continues to block 1040 where the distribution controller 156 processes the ingestion state 536, as further described below with reference to
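Taken together, the control-state processing of blocks 1005 through 1035 can be summarized by the following Python sketch (a minimal sketch with assumed dictionary-based queue and client structures, not a definitive implementation):

import threading
import time

def process_control_state(queue, clients, lock, period_seconds):
    """Control state sketch: purge finished records, add records for each playing
    client's next logical group, then transition the queue to the ingestion state."""
    with lock:                                            # exclusive access (block 1005)
        # Block 1010: drop records whose logical group was fully sent and records
        # for clients that are not in a play state.
        queue["records"] = [r for r in queue["records"]
                            if not r["done"] and clients[r["client_id"]]["playing"]]
        # Blocks 1015-1025: append one record per frame of each playing client's
        # next logical group.
        for client_id, client in clients.items():
            if client["playing"]:
                for frame_id in client["next_group_frames"]:
                    queue["records"].append({"client_id": client_id, "frame_id": frame_id,
                                             "pointer": ("storage", frame_id), "done": False})
        time.sleep(period_seconds)                        # block 1030: wait out the time period
        queue["state"] = "ingestion"                      # block 1035: state transition 720

# Usage sketch:
queue = {"state": "control", "records": []}
clients = {"client A": {"playing": True, "next_group_frames": [0, 1, 2, 3]}}
process_control_state(queue, clients, threading.Lock(), period_seconds=0.0)
print(queue["state"], len(queue["records"]))  # ingestion 4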
Control then continues to block 1105 where the distribution controller 156 locks the queue 152, which gives the distribution controller 156 exclusive access to the queue 152 and prevents other programs, processes, or threads from accessing the queue 152. Control then continues to block 1110 where, for every queue record with a pointer 430 that points to secondary storage (remote or local), the distribution controller 156 copies the contents of the frame (the pointer 430 points at the frame content) from local secondary storage (e.g., a local disk drive 125, 126, or 127) or from remote secondary storage (e.g., the remote disk drive 134 attached to the content server 100 via the network 130) to the memory 102.
Control then continues to block 1115 where, for every queue record with a pointer 430 that points to a network address, the distribution controller 156 copies the contents of the frame from the network (e.g., computer systems within the network) to the memory 102. Control then continues to block 1120 where the distribution controller 156 sets the pointer 430 in the queue records to point to (to contain the address of) the content in the memory 102.
Control then continues to block 1125 where the distribution controller 156 waits for the logical group time period to expire. Control then continues to block 1130 where the distribution controller 156 changes the state in the queue 152 to indicate the distribution state 636 and releases the lock on the queue 152, which performs the state transition 725. Control then continues to block 1135 where the distribution controller 156 processes the distribution state 636 of the queue 152, as further described below with reference to
Control then continues to block 1199 where the logic of
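The ingestion-state processing of blocks 1105 through 1130 can be sketched as follows (illustrative only; dictionaries stand in for the memory 102 and for local or remote secondary storage, and the record layout matches the control-state sketch above):

import time

def process_ingestion_state(queue, storage, memory, lock, period_seconds):
    """Ingestion state sketch: copy each frame's content from storage into memory and
    repoint each queue record at the in-memory copy."""
    with lock:                                            # exclusive access (block 1105)
        for record in queue["records"]:
            location, key = record["pointer"]
            if location != "memory":                      # blocks 1110-1115: not yet in memory
                memory[key] = storage[key]                # copy the frame content into memory 102
            record["pointer"] = ("memory", key)           # block 1120: pointer 430 now addresses memory
        time.sleep(period_seconds)                        # block 1125: wait out the time period
        queue["state"] = "distribution"                   # block 1130: state transition 725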
Control begins at block 1200. Control then continues to block 1205 where the distribution controller 156 locks the queue 152. Control then continues to block 1210 where the distribution controller 156 waits for the availability of the network adapter 114. Control then continues to block 1215 where the distribution controller 156 transfers the entire contents of the frame specified by each queue record in the queue 152 to the network adapter 114. Control then continues to block 1220 where the network adapter 114 transfers the contents of the frame specified by each queue record from the memory 102 to the target clients specified by each queue record, which results in transferring all of the frames in the respective logical groups to the respective clients during one logical group time period. Control then continues to block 1225 where the distribution controller 156 waits until the logical group time period expires. Control then continues to block 1230 where the distribution controller 156 changes the state of the queue 152 to the control state 436 and releases the lock on the queue 152, which performs the state transition 730. Control then continues to block 1235 where the distribution controller 156 processes the control state 436, as previously described above with reference to
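Finally, the distribution-state processing of blocks 1205 through 1230 can be sketched as follows (illustrative; the send callable stands in for handing frame content to the network adapter 114 for transmission to the target clients):

import time

def process_distribution_state(queue, memory, send, lock, period_seconds):
    """Distribution state sketch: transmit every frame of the queue's logical groups to its
    target client within one logical group time period, then return to the control state."""
    with lock:                                            # exclusive access (block 1205)
        for record in queue["records"]:
            _, key = record["pointer"]                    # content was placed in memory during ingestion
            send(record["client_id"], memory[key])        # blocks 1215-1220: hand the frame to the adapter
            record["done"] = True                         # mark the transfer complete for the next control state
        time.sleep(period_seconds)                        # block 1225: wait until the time period expires
        queue["state"] = "control"                        # block 1230: state transition 730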
Thus, by cycling through the states 436, 536, and 636 at the time reference points 810-1, 810-2, 810-3, 810-4, 810-5, 810-6, and 810-7, an embodiment of the invention transmits the frame content in logical groups, which eliminates the need for complex session handling between the clients 135 and 136 and the server 100. Further, the transmission of logical groups of frame content within logical group time periods provides predictable network bandwidth consumption, such that an embodiment of the invention allows the network load to be driven higher than conventional asynchronous IP network traffic allows.
In the previous detailed description of exemplary embodiments of the invention, reference was made to the accompanying drawings (where like numbers represent like elements), which form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments were described in sufficient detail to enable those skilled in the art to practice the invention, but other embodiments may be utilized and logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention. In the previous description, numerous specific details were set forth to provide a thorough understanding of embodiments of the invention. But, the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the invention.
Different instances of the word “embodiment” as used within this specification do not necessarily refer to the same embodiment, but they may. Any data and data structures illustrated or described herein are examples only, and in other embodiments, different amounts of data, types of data, fields, numbers and types of fields, field names, numbers and types of rows, records, entries, or organizations of data may be used. In addition, any data may be combined with logic, so that a separate data structure is not necessary. The previous detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The present application is related to commonly-assigned patent application Ser. No. ______, Attorney Docket Number ROC920060366US1, to Glenn D. Batalden, et al., filed on even date herewith, entitled "SENDING CONTENT FROM MULTIPLE CONTENT SERVERS TO CLIENTS AT TIME REFERENCE POINTS," which is herein incorporated by reference. The present application is also related to commonly-assigned patent application Ser. No. ______, Attorney Docket Number ROC920060485US1, to Glenn D. Batalden, et al., filed on even date herewith, entitled "DETERMINING A TRANSMISSION ORDER FOR FRAMES BASED ON BIT REVERSALS OF SEQUENCE NUMBERS," which is herein incorporated by reference.