The present disclosure relates to transporting data, such as video data, over a packet switched network.
Broadcasters, such as television broadcasters or other content providers, capture audiovisual content and then pass that content to, e.g., a production studio for distribution to end users. As is becoming more common, the audiovisual content is captured digitally, and is then passed to the production studio in a digital form. While ultimate end users may be provided with a compressed version of the digital audiovisual content for, e.g., their televisions or computer monitors, production engineers (and perhaps others) often desire a full, original, non-compressed version of the audiovisual data stream.
When a venue at which the audiovisual content is captured is distant from the production studio, the venue and the production studio must be connected via an electronic network to transfer the audiovisual content. The electronic network infrastructure may be public and is often some sort of time division multiplex (TDM) network based on, e.g., Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH) technology. Such network connectivity provides a “strong” link between two endpoints (and thus between the venue at which the audiovisual content is captured and the production studio), such that the full, original audiovisual data stream can be transmitted without concern about timing or data loss. It is nevertheless becoming increasingly desirable to employ packet switched networks (PSNs) for transmitting captured digital audiovisual data streams between endpoints. PSNs, however, can present challenges for transmitting certain types of data streams.
Embodiments described herein enable the convergence of a constant bit rate video distribution network and a packet switched network such as an Ethernet network. In one embodiment, a method includes, at an ingress node, receiving a constant bit rate data stream, segmenting the constant bit rate data stream into fixed size blocks of data, generating a time stamp indicative of a system reference clock, the time stamp being in reference to a clock rate of the constant bit rate data stream, encapsulating, in an electronic communication protocol frame, a predetermined number of fixed size blocks of data along with (i) a control word indicative of, at least, a relative sequence of the predetermined number of fixed size blocks of data in the constant bit rate stream and (ii) the time stamp, and transmitting the electronic communication protocol frame to a packet switched network.
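The ingress-side flow can be illustrated with a short sketch. The following Python fragment is only a minimal model of the steps recited above; the block size, blocks-per-frame count, and field widths are assumptions chosen for exposition, not the claimed frame format.

```python
# Minimal sketch of the ingress-side method recited above. The block
# size, blocks-per-frame count, and field widths are assumptions for
# illustration, not the actual on-the-wire format.
import struct

BLOCK_SIZE = 64          # assumed fixed block size, in bytes
BLOCKS_PER_FRAME = 8     # assumed predetermined number of blocks per frame

def segment(cbr_bytes):
    """Segment the constant bit rate stream into fixed size blocks."""
    usable = len(cbr_bytes) - len(cbr_bytes) % BLOCK_SIZE
    return [cbr_bytes[i:i + BLOCK_SIZE] for i in range(0, usable, BLOCK_SIZE)]

def encapsulate(blocks, sequence_number, df_time_stamp):
    """Prepend a control word (here reduced to a sequence number) and the
    time stamp to a group of fixed size blocks, forming one frame payload."""
    control_word = struct.pack("!I", sequence_number & 0xFFFFFFFF)
    time_stamp = struct.pack("!I", df_time_stamp & 0xFFFFFFFF)
    return control_word + time_stamp + b"".join(blocks[:BLOCKS_PER_FRAME])
```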
At an egress node, a method includes receiving, via the packet switched network, the electronic communication protocol frame, generating a slave clock that is controlled at least in part based on the time stamp, clocking the constant bit rate data stream out of memory using the slave clock, and processing selected fixed size blocks of constant bit rate data stream data using information from the control word.
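A complementary egress-side sketch follows, matching the illustrative layout used in the ingress example above (again an assumed layout, not the actual frame format).

```python
# Complementary egress-side sketch, matching the illustrative layout of
# the ingress example above (an assumed layout, not the actual format).
import struct

BLOCK_SIZE = 64  # must match the ingress sketch

def decapsulate(payload):
    """Recover the control word, time stamp, and fixed size blocks from
    one received frame payload."""
    sequence_number, = struct.unpack("!I", payload[0:4])
    df_time_stamp, = struct.unpack("!I", payload[4:8])
    data = payload[8:]
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return sequence_number, df_time_stamp, blocks
```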
Endpoint 120 is shown having a client clock 125. The frequency or rate of clock 125 is the frequency at which a data stream 140, such as a constant bit rate (CBR) video stream, is clocked out of the video equipment at endpoint 120. As will be explained in detail, video equipment at endpoint 130 will ultimately receive the entire, uncompressed version of video stream 140, even though the video stream will have transited a packet switched network 100.
As further shown, a system reference clock 150 is available to an ingress node 500 and an egress node 600 of the packet switched network 100. These nodes may be integral with respective endpoints 120, 130, or physically separated from those endpoints. The purpose of ingress node 500 is to receive CBR data stream 140 and to appropriately packetize the same for transmission via the packet switched network 100. The purpose of egress node 600 is to receive the output of ingress node 500 (via the packet switched network 100), and convert the packetized data back into a CBR data stream 140 for delivery to the video equipment within network endpoint 130.
Ingress node 500 and egress node 600 each include a processor 510, 610 and associated memory 520, 620. The memory 520, 620 may also comprise segmentation and timing logic 550, the function of which will be described more fully below. It is noted, preliminarily, that segmentation and timing logic 550, as well as other functionality of the ingress node 500 and egress node 600, may be implemented as one or more hardware components, one or more software components, or combinations thereof. More specifically, the processors 510, 610 used in conjunction with segmentation and timing logic 550 may comprise a programmable processor (microprocessor or microcontroller) or a fixed-logic processor. In the case of a programmable processor, any associated memory (e.g., 520, 620) may be any type of tangible processor readable memory (e.g., random access, read-only, etc.) that is encoded with or stores instructions. Alternatively, the processors 510, 610 may comprise a fixed-logic processing device, such as an application specific integrated circuit (ASIC) or digital signal processor, that is configured with firmware comprised of instructions or logic that cause the processor to perform the functions described herein. Thus, the segmentation and timing logic 550 may take any of a variety of forms, so as to be encoded in one or more tangible media for execution, such as fixed logic or programmable logic (e.g., software/computer instructions executed by a processor), and any processor may be a programmable processor, programmable digital logic (e.g., a field programmable gate array), an ASIC that comprises fixed digital logic, or a combination thereof. In general, any process logic described herein may be embodied in a processor readable medium encoded with instructions that, when executed by a processor, cause that processor to perform the functions described herein.
In accordance with a particular implementation, the video data stream 140 is segmented, chopped up, or otherwise grouped into individual data blocks 330(1) . . . 330(n) having a fixed size. This processing is performed by segmentation and timing logic 550 in conjunction with processor 510 and TDM to packet module 530.
As mentioned, there are two possible timing scenarios depending on the availability of a common reference clock 150 for the two endpoints. However, the differential timing time stamp mechanism is used in both scenarios, and is explained next.
When an Ethernet packet 300 is received, the payload 401, including the DF time stamp 470 and control word 335, is stored in memory 620, of which queue 630 may be a part.
The egress node 600 knows how the DF time stamp value is determined (i.e., in this example: the number of system reference clock cycles for every n=256 client clock cycles), and with this knowledge the egress node 600 can control slave clock 660 based on the DF time stamp 470 (received from ingress node 500) and the system reference clock 150 (which is common for both nodes).
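An idealized model of that determination is sketched below. Per the text, the DF time stamp is the number of system reference clock cycles counted during every n = 256 client clock cycles; computing it from nominal frequencies, as done here, is a simplification, since real hardware counts actual clock edges, and the example rates are hypothetical.

```python
# Idealized model of the DF time stamp described above: the number of
# system reference clock cycles counted during every n = 256 client
# clock cycles. Computing it from nominal frequencies is a
# simplification; real hardware counts actual clock edges.
N_CLIENT_CYCLES = 256

def df_time_stamp(client_hz, reference_hz):
    return round(N_CLIENT_CYCLES * reference_hz / client_hz)

# Example with hypothetical rates: a 148.5 MHz client clock measured
# against a 155.52 MHz system reference clock.
print(df_time_stamp(148.5e6, 155.52e6))  # -> 268
```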
More specifically, the system reference clock 150 is the same for both nodes. Thus, if during the same number of slave clock 660 cycles (counted by counter A 615) the same number of system reference clock 150 cycles is counted by counter B 625 (that is, the value that is stored as the DF time stamp), then the slave clock 660 frequency equals the client clock 125 frequency. If there is an inequality between the value of the DF time stamp 470 received with an Ethernet frame 300 and the value counted by counter B 625 and latched by latch counter 680, then the frequency of the slave clock 660 is adjusted.
Thus, referring to Table 1, at iteration #1, where the number of system reference clock 150 cycles is greater than the DF time stamp value, the slave clock 660 frequency is decreased. Similarly, at iteration #4, where the number of system reference clock 150 cycles is less than the DF time stamp value, the slave clock 660 frequency should be increased. In sum, at every iteration, i.e., after each receipt of an Ethernet frame 300 with a DF time stamp 470, a determination may be made as to whether the slave clock 660 properly matches the client clock 125, so that the CBR data stream 140 that has been encoded within the payload of the Ethernet frame can be accurately clocked out of packet to TDM module 670 (which might also be part of memory 620). Control of slave clock 660 may be implemented by adder 640.
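A minimal sketch of this per-iteration adjustment follows. The comparison direction matches the iterations described above; the proportional gain is a hypothetical control constant, not something specified in the text.

```python
# Sketch of the per-iteration adjustment summarized above and in Table 1:
# the count latched from counter B is compared with the received DF time
# stamp, and adder 640 nudges the slave clock in the opposite direction.
# The proportional gain is a hypothetical control constant.
def adjust_slave_clock(slave_hz, latched_count, df_value, gain_hz=1.0):
    error = df_value - latched_count   # zero when slave clock 660 matches client clock 125
    return slave_hz + gain_hz * error  # count > DF -> decrease; count < DF -> increase
```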
As mentioned, the system reference clock 150 may not be available at the egress node 600. Thus, in a second embodiment, system reference clock information is fed through the network using the zero bit 331 of each fixed size data block 330.
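The ingress-side zero-bit encoding was described with reference to a figure not reproduced here. The fragment below is therefore only one plausible scheme, offered as an assumption that is consistent with the egress-side averaging described next: a first-order sigma-delta whose output bits balance (averaging to zero under a +1/-1 weighting) when the clocks are on frequency.

```python
# Hypothetical ingress-side zero-bit encoder: a first-order sigma-delta
# that compares the reference clock cycles observed per block against
# their nominal count. When exactly on frequency the emitted bits
# alternate 1,0,1,0,..., so the egress's +1/-1 average runs to zero.
def zero_bit_stream(ref_cycles_per_block, nominal_cycles_per_block, n_blocks):
    """Yield one hypothetical zero bit per fixed size block."""
    acc = 0.0
    for _ in range(n_blocks):
        acc += ref_cycles_per_block - nominal_cycles_per_block
        bit = 1 if acc >= 0 else 0    # quantize the accumulated frequency error
        acc -= 0.5 if bit else -0.5   # feedback keeps the accumulator bounded
        yield bit
```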
More specifically, and now with reference to
The egress node 600 employs a counter (not shown, but which may be implemented within, e.g., adder 640) that averages zero bit values (e.g., the counter adds 1 if the zero bit value is 1, and subtracts 1 if the zero bit is 0). At every “t” clock cycles of slave clock 660, the value of the counter is evaluated to determine whether the slave clock 660 is synchronous with client clock 125; the accumulated average would be zero when synchronous. Where the accumulated value is non-zero, a correction is applied to the regenerated system reference clock. The correction may be applied by adjusting the frequency of a voltage controlled oscillator (VCO), or it could be realized in the digital domain. By maintaining the average of the accumulated zero-bit values at or close to zero, a high quality reference clock can be synthesized such that the CBR data stream 140 can be clocked out of packet to TDM module 670 at the appropriate rate, namely the rate that matches the rate of the client clock 125. In sum, in this second embodiment, the synchronization of client clock 125 and slave clock 660 is effected in two steps: first, the system reference clock is regenerated from the zero bit information of each fixed size block; second, the slave clock 660 is synchronized to the client clock 125 using the regenerated system reference clock.
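The evaluation step can be sketched as follows. The +1/-1 accumulation and the zero-when-synchronous test come from the text; the magnitude and sign of the applied correction are assumptions, since the text specifies only that a non-zero average triggers a correction to the regenerated clock.

```python
# Sketch of the zero-bit averaging described above: the counter adds 1
# for a received zero bit of 1 and subtracts 1 for a zero bit of 0, and
# is evaluated every t slave clock cycles. The correction step size and
# its sign convention are assumptions made for illustration.
def evaluate_zero_bits(zero_bits, regenerated_ref_hz, step_hz=1.0):
    acc = sum(1 if bit else -1 for bit in zero_bits)  # averages to zero when synchronous
    if acc > 0:
        regenerated_ref_hz += step_hz   # assumed direction: surplus of ones -> speed up
    elif acc < 0:
        regenerated_ref_hz -= step_hz
    return regenerated_ref_hz, acc
```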
As previously explained, forward error correction (FEC) may be employed to better handle errors. Thus, even where a link, such as an optical link in packet switched network 100, might generate bit errors, the CBR data stream 140 that is encapsulated therein may nevertheless be transported error free due to the error correction capabilities of FEC.
In any event, where errors remain possible even after FEC correction, ingress node 500 can indicate to egress node 600, via the control word 335, what type of corrective action to take, and can also supply other helpful information to the far end egress node 600. The fields of the control word 335 are as follows:
L—when set, indicates an invalid payload due to failure of attachment circuit.
R—when set, indicates a remote error or failure.
C—when set, indicates a client signal failure.
S—when set, indicates loss of character synchronization (a form of client signal failure).
M—when set, indicates the main (versus protect) path. This field is used to differentiate data coming from different paths (main and protect) and is useful to avoid duplicated packets. For protection, the same traffic can be sent on a working path and on a protect path, and the two can be differentiated by this specific bit. A receiver can, based on the value of the M field, immediately ascertain whether a stream is being received via the working or the protect path.
The “type” field provides still further information to the egress node 600. The type field identifies, for example, the kind of video that is being transported, as well as instructions regarding error correction techniques. Specifically, selected combinations of bits can indicate to the egress node 600 to replace a current frame with the last sent frame (here the egress node 600 would maintain in its memory a 2-video-frame buffer, wherein frame n is kept stored and repeated in case frame n+1 has errors). Similarly, a code may be supplied to indicate to replace just an errored packet with the corresponding packet of the previous frame. The code may also indicate to deliver a packet with a known error therein. Finally, the code may indicate to replace an errored packet with fixed data.
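The four concealment actions can be sketched as a simple dispatch. The numeric codes and the fixed data pattern below are hypothetical; only the actions themselves come from the description above.

```python
# Sketch of the four concealment actions the type field can select, per
# the description above. The numeric codes are hypothetical; only the
# actions themselves come from the text. prev_frame_packets is the
# stored frame n from the 2-video-frame buffer mentioned above.
REPLACE_FRAME, REPLACE_PACKET, DELIVER_AS_IS, REPLACE_WITH_FIXED = range(4)
FIXED_FILL = b"\x00" * 64  # hypothetical fixed data pattern

def conceal(code, frame_packets, prev_frame_packets, errored):
    """Apply one action to the current frame; `errored` flags bad packets."""
    if code == REPLACE_FRAME:       # repeat stored frame n if frame n+1 has errors
        return prev_frame_packets if any(errored) else frame_packets
    if code == REPLACE_PACKET:      # swap only errored packets with prior-frame copies
        return [p if bad else c for c, p, bad
                in zip(frame_packets, prev_frame_packets, errored)]
    if code == DELIVER_AS_IS:       # deliver packets with their known errors intact
        return frame_packets
    if code == REPLACE_WITH_FIXED:  # substitute fixed data for errored packets
        return [FIXED_FILL if bad else c for c, bad in zip(frame_packets, errored)]
    return frame_packets            # unknown code: pass through unchanged
```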
The OS field comprises four bits and is used to support optical automatic protection switching (e.g., failover or handover) by transporting a K1/K2-like protocol for protection switching. Protection schemes rely on near end and far end nodes exchanging messages, which are usually transported in band (inside the packet). SONET defines two bytes, called K1 and K2, to carry these messages. Other bits may be defined to transport similar or other messages that enable management of the protection scheme.
The sequential number may be used to re-order received fixed size blocks, since the packet switched network 100 may deliver the frames 300 in a different order than that in which they were transmitted. Finally, the cyclic redundancy code helps to ensure the integrity of the data of the control word.
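Taken together, the control word fields can be illustrated with one possible packing. Only the flag names (L, R, C, S, M), the 4-bit OS field, the type field, the sequential number, and the CRC come from the text; every bit position and width below is a hypothetical layout chosen for illustration.

```python
# Sketch of one possible bit layout for the control word fields described
# above. Flag names, the 4-bit OS field, the type field, the sequential
# number, and the CRC come from the text; all positions and widths are
# hypothetical.
def pack_control_word(l, r, c, s, m, os_bits, type_code, seq, crc):
    word  = (l & 1) << 31           # L: invalid payload (attachment circuit failure)
    word |= (r & 1) << 30           # R: remote error or failure
    word |= (c & 1) << 29           # C: client signal failure
    word |= (s & 1) << 28           # S: loss of character synchronization
    word |= (m & 1) << 27           # M: main (versus protect) path
    word |= (os_bits & 0xF) << 23   # OS: 4-bit K1/K2-like protection messaging
    word |= (type_code & 0x7) << 20 # type: hypothetical 3-bit width
    word |= (seq & 0xFFF) << 8      # sequential number: hypothetical 12-bit width
    word |= (crc & 0xFF)            # CRC over the control word: hypothetical 8 bits
    return word
```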
Although the system and method are illustrated and described herein as embodied in one or more specific examples, they are nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the scope of the apparatus, system, and method, and within the scope and range of equivalents of the claims. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the apparatus, system, and method, as set forth in the following claims.