The present application claims priority to the Chinese patent application identified as 201210417377.8, filed on Oct. 26, 2012, and entitled “Interface for Asynchronous Virtual Container Channels and High Data Rate Port,” the disclosure of which is incorporated by reference herein in its entirety.
The field relates generally to network-based communication systems, and more particularly to techniques for providing an interface between multiple asynchronous virtual container channels and a single high data rate port in a circuit emulation over packet environment in such communication systems.
Conventional network-based communication systems include systems configured to operate in accordance with well-known synchronous transport standards, such as the synchronous optical network (SONET) and synchronous digital hierarchy (SDH) standards.
The SONET standard was developed by the Exchange Carriers Standards Association (ECSA) for the American National Standards Institute (ANSI), and is described in the document ANSI T1.105-1988, entitled “American National Standard for Telecommunications—Digital Hierarchy Optical Interface Rates and Formats Specification” (September 1988), which is incorporated by reference herein. SDH is a corresponding standard developed by the International Telecommunication Union (ITU), set forth in ITU standards documents G.707 and G.708, which are incorporated by reference herein.
The basic unit of transmission in the SONET standard is referred to as synchronous transport signal level-1 (STS1). It has a data rate of 51.84 Megabits per second (Mbps). The corresponding unit in the SDH standard is referred to as synchronous transport module level-0 (STM0). Synchronous transport signals at higher levels comprise multiple STS1 or STM0 signals. For example, an intermediate unit of transmission in the SONET standard is referred to as synchronous transport signal level-3 (STS3). It has a data rate of 155.52 Mbps. The corresponding unit in the SDH standard is referred to as STM1.
A given STS3 or STM1 signal is organized in frames having a duration of 125 microseconds (μsec), each of which may be viewed as comprising nine rows by 270 columns of bytes, for a total frame capacity of 2,430 bytes per frame. The first nine bytes of each row comprise transport overhead (TOH), while the remaining 261 bytes of each row are referred to as a synchronous payload envelope (SPE). Synchronous transport via SONET or SDH generally involves a hierarchical arrangement in which an end-to-end path may comprise multiple lines with each line comprising multiple sections. The TOH includes section overhead (SOH), pointer information, and line overhead (LOH). The SPE includes path overhead (POH). Additional details regarding signal and frame formats can be found in the above-cited standards documents.
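By way of illustrative example only, the following Python sketch separates a 2,430-byte STS3/STM1 frame into its per-row TOH and SPE bytes and checks the resulting line rate; the constant and function names are illustrative and not taken from any standard API.

```python
FRAME_ROWS = 9            # rows per STS3/STM1 frame
FRAME_COLS = 270          # columns (bytes) per row
TOH_COLS = 9              # first 9 bytes of each row are transport overhead
FRAMES_PER_SECOND = 8000  # one 125-microsecond frame every 1/8000 second

def split_frame(frame: bytes) -> tuple[bytes, bytes]:
    """Split one 2,430-byte STS3/STM1 frame into its TOH and SPE bytes."""
    assert len(frame) == FRAME_ROWS * FRAME_COLS
    toh, spe = bytearray(), bytearray()
    for row in range(FRAME_ROWS):
        start = row * FRAME_COLS
        toh += frame[start:start + TOH_COLS]                # 9 TOH bytes per row
        spe += frame[start + TOH_COLS:start + FRAME_COLS]   # 261 SPE bytes per row
    return bytes(toh), bytes(spe)

frame = bytes(FRAME_ROWS * FRAME_COLS)       # all-zero placeholder frame
toh, spe = split_frame(frame)
print(len(toh), len(spe))                    # 81 2349
print(FRAME_ROWS * FRAME_COLS * 8 * FRAMES_PER_SECOND / 1e6)   # 155.52 Mbps
```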
In conventional SONET or SDH network-based communication systems, synchronous transport signals like STS3 or STM1 are mapped to or from corresponding higher-rate optical signals such as a SONET OC-12 signal or an SDH STM4 signal. An OC-12 optical signal carries four STS3 signals, and thus has a data rate of 622.08 Mbps. The SDH counterpart to the OC-12 signal is the STM4 signal, which carries four STM1 signals, and thus also has a data rate of 622.08 Mbps. The mapping of these and other synchronous transport signals to or from higher-rate optical signals generally occurs in a physical layer device commonly referred to as a mapper, which may be used to implement an add-drop multiplexer (ADM) or other node of a SONET or SDH communication system.
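The rate relationships just described can be verified with simple arithmetic; the short sketch below works in kbit/s so the values stay exact. The constant names are illustrative.

```python
STS1_KBPS = 51_840                 # basic SONET unit (STM0 in SDH)
STS3_KBPS = 3 * STS1_KBPS          # 155,520 kbit/s, i.e. 155.52 Mbps (STM1)
OC12_KBPS = 4 * STS3_KBPS          # 622,080 kbit/s, i.e. 622.08 Mbps (STM4)

print(STS3_KBPS / 1000, OC12_KBPS / 1000)   # 155.52 622.08
```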
Such a mapper typically interacts with a link layer processor. A link layer processor is one example of what is more generally referred to herein as a link layer device, where the term “link layer” generally denotes a switching function layer. Another example of a link layer device is a field programmable gate array (FPGA). These and other link layer devices can be used to implement processing associated with various packet-based protocols, such as Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), as well as other protocols, such as Fiber Distributed Data Interface (FDDI). A given mapper or link layer device is often implemented in the form of an integrated circuit.
In many communication system applications, it is necessary to carry circuit-switched traffic such as T1/E1 traffic over a packet network such as an IP network or an ATM network. For example, it is known that T1/E1 traffic from a SONET/SDH network or other circuit-switched network may be carried using virtual containers (VCs). The SONET/SDH mapper maps/de-maps the SONET/SDH transport signals (frames) to/from VCs. When it is desired or necessary to carry VCs over an IP network or other packet network, the VCs are packed into packets of the IP network or other packet network. In the opposite transmission direction, VCs from the packets of the IP network or other packet network are unpacked for transmission in the SONET/SDH network. The link layer processor packs/unpacks VCs into/from the packets.
The packing/unpacking of VCs or other time-division multiplexed (TDM) data to/from IP packets or other types of packets may be performed in accordance with a circuit emulation protocol, such as the CEP protocol described in IETF RFC 4842, “Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) Circuit Emulation over Packet (CEP),” April 2007, which is incorporated by reference herein.
While data rates of transmission channels carrying virtual container data (virtual container channels) associated with a physical layer device, such as a SONET/SDH mapper, are typically asynchronous, a link layer device, such as a link layer processor, typically does not have the ability to input/output multiple asynchronous virtual container channels. Embodiments of the invention provide an interface between multiple asynchronous virtual container channels of a physical layer device and a single high data rate port of a link layer device.
In one embodiment, an apparatus comprises data rate justification circuitry adapted to control one or more communications between a physical layer device and a link layer device. In a first direction of communication, the data rate justification circuitry is configured to receive first virtual container data from the physical layer device over two or more asynchronous virtual container channels, and to synchronize the first virtual container data and aggregate the first virtual container data for transmission to the link layer device over a high data rate port. In a second direction of communication, the data rate justification circuitry is configured to receive second virtual container data from the link layer device over the high data rate port, and to decode data rate information associated with the second virtual container data and separate the second virtual container data for transmission to the physical layer device over the two or more asynchronous virtual container channels.
Other embodiments may implement other types of data rate justification and virtual container data aggregation/separation techniques to support interface functionality between a physical layer device and a link layer device.
Embodiments of the invention will be illustrated herein in conjunction with an exemplary network-based communication system which includes a physical layer device, a link layer device and other elements configured in a particular manner. It should be understood, however, that the disclosed techniques are more generally applicable to any communication system application in which it is desirable to provide data rate justification functionality to support circuit emulation over packet protocols. Thus, while reference will be made below to SONET/SDH networks and IP networks, it is to be understood that the disclosed techniques may be used in other circuit-switched networks and other packet networks.
As mentioned above, at the boundary of a SONET/SDH network and a packet network, SONET/SDH frames are de-mapped into VCs, and then these VCs are packed into packets and transmitted on the packet network. In the opposite transmission direction, VCs are unpacked from the packet network and then mapped into SONET/SDH frames to be transmitted on the SONET/SDH network.
It is realized, however, that synchronous transport signals (STS-n/STM-n) can be de-mapped into multiple VC channels, and that the data rates of these VC channels are asynchronous. For example, the STM1 signal is the most used SDH signal. One STM1 signal can be de-mapped into one VC4 channel, three VC3 channels, 63 VC12 channels, or 84 VC11 channels. Although most conventional link layer processors are software programmable and have the flexibility to upgrade to support the VC data format, such link layer processors do not have sufficient hardware interfaces to receive/transmit multiple VC channels separately.
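As a minimal illustration of the channelization options just listed, the mapping below simply restates the channel counts from the text and shows why terminating the channels individually could require up to 63 or 84 separate interfaces on a link layer device; the names are illustrative.

```python
# VC channel counts obtained when one STM1 signal is de-mapped,
# per the channelization options described above.
STM1_CHANNELIZATIONS = {"VC4": 1, "VC3": 3, "VC12": 63, "VC11": 84}

for vc_type, count in STM1_CHANNELIZATIONS.items():
    print(f"STM1 de-mapped into {count} asynchronous {vc_type} channel(s)")
```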
Accordingly, embodiments of the invention provide methods and apparatus that address these and other issues by providing an interface between multiple asynchronous VC channels of a mapper and a single high data rate port of a link layer processor. It is to be understood that by the phrase “asynchronous VC channels,” it is meant that a given VC channel can be asynchronous with one or more other given VC channels and/or can be asynchronous with the single high data rate port. For example, one embodiment of the invention includes an interface that adds extra frame headers on VC frames to denote data rate justification, and then aggregates multiple asynchronous VC channels on a single high data rate port. Most conventional link layer processors have such a single high data rate port, e.g., C4 container port. As such, an improved CEP solution is provided for a conventional link layer processor architecture.
Although shown in the figure as being separate from the networks 104 and 106, the node 102 may be viewed as being part of one of the networks 104 or 106. For example, the node 102 may comprise an edge node of network 104 or network 106. Alternatively, the node may represent a standalone router, switch, network element or other communication device arranged between nodes of the networks 104 and 106.
The node 102 of system 100 comprises data rate justification circuitry 110 coupled between a mapper 112 and a link layer processor 114. Data rate justification circuitry 110 functions as an interface, as will be explained further herein, between mapper 112 and link layer processor 114. The node 102 also includes a host processor 116 that is used to configure and control one or more of data rate justification circuitry 110, mapper 112 and link layer processor 114. Portions of the host processor functionality may be incorporated into one or more of elements 110, 112 or 114 in other embodiments. Also, although the data rate justification circuitry 110 is shown in FIG. 1 as separate from the mapper 112 and the link layer processor 114, portions of its functionality may likewise be incorporated into one or both of those elements in other embodiments.
The data rate justification circuitry 110, mapper 112, link layer processor 114, and host processor 116 in this embodiment may be installed on a line card or other circuit structure of the node 102. Each of the elements 110, 112, 114 and 116 may be implemented as a separate integrated circuit, or one or more of the elements may be combined into a single integrated circuit. Various elements of the system 100 may therefore be implemented, by way of example and without limitation, utilizing a microprocessor, an FPGA, an application-specific integrated circuit (ASIC), a system-on-chip (SOC) or other type of data processing device, as well as portions or combinations of these and other devices. One or more other nodes of the system 100 in one or both of networks 104 and 106 may each be implemented in a manner similar to that shown for node 102 in FIG. 1.
The data rate justification circuitry 110 controls certain communications between the mapper 112 and the link layer processor 114, in order to enable the mapper 112 to input/output VCs over multiple independent (asynchronous) VC channels 118 and the link layer processor 114 to input/output the corresponding VCs over a single high data rate port 120. The mapper 112 and link layer processor 114 are examples of what are more generally referred to herein as physical layer devices and link layer devices, respectively. The term “physical layer device” as used herein is intended to be construed broadly so as to encompass any device which provides an interface between a link layer device and a physical transmission medium of a network-based system. The term “link layer device” is also intended to be construed broadly, and should be understood to encompass any type of processor which performs processing operations associated with a link layer of a network-based system.
The mapper 112 and link layer processor 114 may include functionality of a conventional type. Such functionality, being well known to those skilled in the art, will not be described in detail herein, but may include functionality associated with known mappers, such as the LSI Hypermapper™, Ultramapper™ and Supermapper™ devices, and known link layer devices, such as the LSI Link Layer Processor. These LSI devices are commercially available from LSI Corporation of Milpitas, Calif., U.S.A. However, in accordance with embodiments of the invention, it is also to be understood that mapper 112 and link layer processor 114 are adapted to implement one or more techniques described herein.
The node 102 may also include other processing devices not explicitly shown in the figure. For example, the node may comprise a conventional network processor such as an LSI Advanced PayloadPlus® network processor in the APP300, APP500 or APP650 product family, also commercially available from LSI Corporation.
Although only single instances of the data rate justification circuitry 110, mapper 112 and link layer processor 114 are shown in the node 102 of FIG. 1, other embodiments may include multiple instances of one or more of these elements.
The data rate justification circuitry 110 is coupled between the mapper 112 and the link layer processor 114 and includes an ingress module 122 and an egress module 124. The ingress module 122 supports a direction of communication through node 102 from the SONET/SDH network 104 to the packet network 106 (also referred to as a drop path). The egress module 124 supports a direction of communication through node 102 from the packet network 106 to the SONET/SDH network 104 (also referred to as an insert path). The data rate justification circuitry 110 operates in conjunction with the mapper 112 to add extra frame headers on VC frames to denote data rate justification, and to aggregate multiple asynchronous VC channels 118 on a single high data rate port 120 associated with link layer processor 114. The ability to aggregate multiple asynchronous VC channels on a single high data rate port improves operation of the CEP protocol or other circuit emulation over packet protocols used to pack/unpack VCs to/from packets.
More particularly, in the ingress direction, ingress module 122 receives data from mapper 112 over multiple independent VC channels 118, and synchronizes these asynchronous channels to a data rate of the high data rate port 120. Ingress module 122 also reserves space and fills certain fields in a CEP packet header, and adds a justification super frame header, to be explained in detail below, for each CEP packet to denote the data rate justification. Then, data from two or more of the multiple VC channels is packed together and transmitted to link layer processor 114 over the high data rate port 120.
In the egress direction, egress module 124 requests (or otherwise receives) data from the link layer processor 114 through the high data rate port 120. The link layer processor 114 is adapted to add the extra justification super frame header on each CEP packet to denote the data rate justification for each VC channel. Then, egress module 124 decodes both the extra justification super frame header and the CEP packet header received over the high data rate port 120 from link layer processor 114, adapts the data rate justification, and sends VCs on corresponding ones of the multiple VC channels 118 to SONET/SDH mapper 112 with proper data rate and format.
The operation of the data rate justification circuitry 110 will now be described in greater detail with reference to the remaining figures.
As described in the above-referenced IETF RFC 4842, a packet that is generated in accordance with the CEP protocol has a CEP frame format that includes a CEP header and a CEP payload. The CEP payload includes the SONET/SDH VC data to be transmitted over packet network 106. Thus, data rate justification circuitry 110 interfaces the SONET/SDH mapper 112, which operates in a VC frame format, with link layer processor 114, which operates in a CEP frame format. The format of a CEP header is shown in FIG. 2.
In CEP header format 200 of FIG. 2, the field of particular relevance to the present discussion is the Structure Pointer field 216. Data rate justification circuitry 110 utilizes the Structure Pointer field 216 of the CEP header 200 to record the start position of the VC frame within the CEP payload.
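Since the Structure Pointer is used here simply to mark where the VC frame begins inside the CEP payload, the following hedged sketch models it as a plain byte offset; it deliberately ignores the exact bit layout of the RFC 4842 header, and the function name is illustrative.

```python
def locate_vc_frame(cep_payload: bytes, structure_pointer: int) -> bytes:
    """Rotate the CEP payload so the VC frame starts at byte 0.

    structure_pointer is treated as the byte offset of the first byte of the
    VC frame within the CEP payload (a simplification of field 216).
    """
    if not 0 <= structure_pointer < len(cep_payload):
        raise ValueError("structure pointer outside CEP payload")
    return cep_payload[structure_pointer:] + cep_payload[:structure_pointer]

payload = bytes(range(12))                   # toy 12-byte CEP payload
print(list(locate_vc_frame(payload, 5)))     # [5, 6, ..., 11, 0, 1, ..., 4]
```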
We now describe how data rate justification circuitry 110 functions as an interface between the multiple asynchronous VC channels 118 and the single high data rate port 120, enabling link layer processor 114 to receive VC frames from multiple asynchronous VC channels on a single high data rate port. Embodiments utilize a frame formatting technique that generates a justification super frame (JSF). As will be understood from the description below, both data rate justification circuitry 110 and link layer processor 114 are able to generate JSFs.
As is known, for a single VC channel, VC data is packed into a VC super frame. In a VC12 application, the super frame period is 500 μs. In accordance with the CEP protocol, the VC super frame forms the CEP payload, and an 8-byte CEP header is added to the CEP payload to form a CEP packet. The CEP header has a format as described above in the context of FIG. 2. In the frame formatting technique of this embodiment, a JSF header 302 is then added to the CEP packet to form a JSF 300, as shown in FIG. 3.
The first byte field of JSF header 302, labeled SF PTR 304, is a super frame pointer. SF PTR 304 points to the position of the first byte of the 8-byte CEP header 312 in JSF 300. If there are multiple CEP headers in the same JSF, SF PTR 304 points to the CEP header closest to the start of the JSF. The pointer value of SF PTR 304 varies from 1 to 148; a value of 0 denotes that there is no CEP header in the current JSF.
The second byte field of JSF header 302, labeled Justification Ind. 306, includes two bits to indicate the type of data rate justification that is to be implemented. The data rate of the subject data is justified by using one or both of positive justification byte field 316 and negative justification byte field 318. In one embodiment, the two bits of Justification Ind. 306 are designated as follows:
00: Positive justification byte field 316 is used, negative justification byte field 318 is not used;
01: Neither positive justification byte field 316 nor negative justification byte field 318 is used; and
10: Both positive justification byte field 316 and negative justification byte field 318 are used.
Accordingly, the last two byte fields 316 and 318 of JSF 300 implement the data rate justification. The usage of these bytes is dictated by Justification Ind. 306. When a justification byte is indicated as being needed, a byte from the subject CEP packet is inserted in one or both of the justification byte fields 316 and 318. When a justification byte is not needed, the justification byte fields are reserved, and no useful data is placed in the fields. The device that receives the JSF is configured to discard any data in those fields if Justification Ind. 306 indicates that data rate justification is not needed.
It is to be understood that the step of adding bytes to the JSF to thereby justify the data rate of the subject data is used in order to synchronize the VC data that comes from the same and/or separate asynchronous (independent) VC channels 118 with the data rate of the high data rate port 120. Examples of scenarios involving no data rate justification, positive data rate justification, and negative data rate justification are given below.
Note that extra Reserved Bytes 308 are appended at the end of JSF header 302, and the use of these reserved bytes is left to the discretion of the user. For the example application, JSF header 302 includes one reserved byte, thus giving JSF 300 a total length of 152 bytes.
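The layout just described can be summarized in a short packing routine. In the sketch below, the fixed data region is assumed to be 147 bytes, which is simply what remains of the 152-byte VC12 example after the three JSF header bytes and the two trailing justification byte fields; that inference, and all names, are assumptions rather than details taken from the figures.

```python
JSF_LEN = 152                       # total JSF length in the example VC12 application
HDR_LEN = 3                         # SF PTR + Justification Ind. + one reserved byte
DATA_LEN = JSF_LEN - HDR_LEN - 2    # assumed fixed data region (147 bytes)

JUST_NOMINAL = 0b00    # positive justification byte carries data, negative unused
JUST_POSITIVE = 0b01   # neither justification byte carries data
JUST_NEGATIVE = 0b10   # both justification bytes carry data

def pack_jsf(sf_ptr: int, just_ind: int, data: bytes,
             pos_byte: int = 0, neg_byte: int = 0) -> bytes:
    """Pack one 152-byte justification super frame (JSF 300)."""
    if not 0 <= sf_ptr <= 148:
        raise ValueError("SF PTR must be 0..148 (0 means no CEP header in this JSF)")
    if len(data) != DATA_LEN:
        raise ValueError(f"fixed data region must be {DATA_LEN} bytes")
    jsf = bytearray([sf_ptr, just_ind & 0b11, 0x00])  # SF PTR, Justification Ind., reserved
    jsf += data                                       # CEP packet bytes
    jsf += bytes([pos_byte, neg_byte])                # trailing justification fields 316, 318
    return bytes(jsf)

jsf = pack_jsf(sf_ptr=1, just_ind=JUST_NOMINAL, data=bytes(DATA_LEN), pos_byte=0xAB)
print(len(jsf), jsf[0], jsf[-2])    # 152 1 171
```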
It is to be appreciated that the data rate of each VC channel (each of the multiple VC channels 118 in FIG. 1) may be equal to, slightly lower than, or slightly higher than a nominal data rate, and the data rate justification mechanism of the JSF accounts for each of these cases.
If the data rate of a subject VC channel (one of channels 118 in FIG. 1) is equal to the nominal data rate, no data rate justification is needed; Justification Ind. 306 is set to 00, and the positive justification byte field 316 carries one byte of CEP packet data while the negative justification byte field 318 is not used.
If the data rate of a subject VC channel (one of channels 118 in FIG. 1) is lower than the nominal data rate, positive data rate justification is performed; Justification Ind. 306 is set to 01, neither justification byte field is used, and one less byte of CEP packet data is carried in the JSF period.
If the data rate of a subject VC channel (one of channels 118 in FIG. 1) is higher than the nominal data rate, negative data rate justification is performed; Justification Ind. 306 is set to 10, both justification byte fields 316 and 318 carry CEP packet data, and one more byte of CEP packet data is carried in the JSF period.
In order to reduce the buffer size consumed by the frame formatting technique described herein, the 500 μs JSF is further divided into four even subframes, each of which is transmitted in 125 μs. In the example VC12 application, the size of a subframe is 38 bytes.
Then, subframes from all VC channels that contributed VC data are packed together to form an aggregate frame. The aggregate frame is transmitted on the high data rate port 120 in 125 μs. Therefore, a JSF for each VC channel is transmitted in four successive aggregate frames.
In alternate embodiments, a JSF may be divided into a number of subframes other than four (e.g., more generally, D) such that a JSF for each VC channel is transmitted in D successive aggregate frames.
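A minimal sketch of the subframe division just described: a 152-byte JSF is split into D even subframes (D = 4 in the example), one per 125 μs aggregate frame, and concatenating D successive subframes recovers the JSF. Function names are illustrative.

```python
def split_into_subframes(jsf: bytes, d: int = 4) -> list[bytes]:
    """Divide one JSF into d even subframes, one per aggregate frame."""
    if len(jsf) % d:
        raise ValueError("JSF length must divide evenly into d subframes")
    size = len(jsf) // d
    return [jsf[i * size:(i + 1) * size] for i in range(d)]

def reassemble_jsf(subframes: list[bytes]) -> bytes:
    """Concatenate d successive subframes back into the original JSF."""
    return b"".join(subframes)

jsf = bytes(range(152))                       # example 152-byte VC12 JSF
subframes = split_into_subframes(jsf)
print([len(s) for s in subframes])            # [38, 38, 38, 38]
assert reassemble_jsf(subframes) == jsf
```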
An embodiment of an aggregate frame format is shown as aggregate frame 700 in FIG. 7. The aggregate frame 700 begins with a header field that includes a two-bit indicator identifying the position of the current 125 μs aggregate frame within the 500 μs JSF period. In one embodiment, the two bits are designated as follows:
00: denotes the first 125 μs aggregate frame in the 500 μs period; the subframes transmitted in the current aggregate frame contain the JSF headers;
01: denotes the second 125 μs aggregate frame in the 500 μs period;
10: denotes the third 125 μs aggregate frame in the 500 μs period; and
11: denotes the fourth 125 μs aggregate frame in the 500 μs period.
Aggregate frame 700 then includes a set of bit interleaved parity (BIP) bytes 706. Each byte provides a parity check for VC channels belonging to the same STS1/STM0 channel. The number of BIP bytes depends on the STM-n application; in one embodiment, the number is 3n bytes for an STM-n application. The number of BIP bytes may, however, vary in other embodiments.
According to the SONET/SDH protocol, one STM0 channel may contain 1 VC3 channel, 21 VC12 channels, or 28 VC11 channels. For the example application, aggregate frame 700 includes three BIP bytes, with each byte providing a parity check for 21 VC12 channels.
Next, the aggregate frame 700 includes 63 subframes 708-1, . . . , 708-63. It is understood that these subframes are from different VC channels of the multiple asynchronous VC channels 118. The subframes 708-1, . . . , 708-63 are transmitted in the order of their channel number.
The end of aggregate frame 700 is filled with stuff bytes 710 to pad the data rate, if necessary, to match the high data rate port 120. For the example application, under a 155.52 Megahertz (MHz) clock, the C4 interface port of a link layer processor can transmit 2,430 bytes per 125 μs, with the last 26 bytes of the aggregate frame filled with stuff bytes.
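The aggregate frame structure just described can be sketched as follows. The two-bit indicator is assumed to occupy a single header byte, the BIP bytes are computed here as a simple even-parity XOR over the covered subframes, and consecutive blocks of 21 VC12 channels are assumed to share one BIP byte; the exact header layout and parity coverage of FIG. 7 are not reproduced, so this is an assumption-laden sketch rather than the actual frame definition.

```python
AGG_LEN = 2430        # bytes the C4 port carries per 125 microseconds
SUBFRAME_LEN = 38     # VC12 subframe size in the example application
NUM_CHANNELS = 63     # VC12 channels in the example application

def bip8(data: bytes) -> int:
    """Bit interleaved parity, modeled here as an even-parity XOR over the bytes."""
    parity = 0
    for b in data:
        parity ^= b
    return parity

def build_aggregate_frame(seq: int, subframes: list[bytes]) -> bytes:
    """Assemble one 125-microsecond aggregate frame (layout partly assumed)."""
    if len(subframes) != NUM_CHANNELS or any(len(s) != SUBFRAME_LEN for s in subframes):
        raise ValueError("expected 63 subframes of 38 bytes each")
    frame = bytearray([seq & 0b11])       # assumed header byte carrying the 2-bit indicator
    for group in range(3):                # one BIP byte per STM0 worth of 21 VC12 channels
        frame.append(bip8(b"".join(subframes[group * 21:(group + 1) * 21])))
    for sub in subframes:                 # subframes in channel-number order
        frame += sub
    frame += bytes(AGG_LEN - len(frame))  # stuff bytes pad the frame to 2,430 bytes
    return bytes(frame)

subframes = [bytes([ch]) * SUBFRAME_LEN for ch in range(NUM_CHANNELS)]
print(len(build_aggregate_frame(seq=0b00, subframes=subframes)))   # 2430
```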
Recall that in the ingress direction, there are multiple asynchronous VC channels collectively referred to as VC channels 118. In the ingress module 122 of FIG. 8, each of the VC channels 118 is processed by a corresponding VC adaptor 802, and the resulting channel streams are aggregated for transmission on the high data rate port 120.
An embodiment of a VC adaptor 802 is illustrated in FIG. 9.
A VC channel comprises three signals: VC_CLK, VC_DATA and VC_SYNC. VC_CLK denotes the VC data rate, VC_DATA conveys the VC payload, and VC_SYNC denotes the start of a VC frame.
The VC data is stored into data buffer 902. The VC frame start position is recorded in start position recorder 904. CEP PKT formatter 906 then reads the VC payload data from data buffer 902, adds the CEP header (200 in FIG. 2) to form a CEP packet, and passes the CEP packet to justification formatter 908, which adds the JSF header 302 and performs any required data rate justification.
That is, the output of VC adaptor 802 operates at the nominal data rate, as explained above. When DATA BUF 902 is substantially full (i.e., the VC data rate is higher than the nominal data rate), the justification formatter 908 performs negative data rate justification, also as explained above. That is, with reference back to JSF 300 in FIG. 3, the formatter 908 sets Justification Ind. 306 to 10, uses both justification byte fields 316 and 318 to send CEP packet data, and increases SF PTR 304 of the next JSF by one. In this way, the VC adaptor 802 sends out one more byte in a JSF period.
When DATA BUF 902 is substantially empty (i.e., the VC data rate is lower than the nominal data rate), the justification formatter 908 performs positive data rate justification, as explained above. That is, the formatter 908 sets Justification Ind. 306 to 01, uses neither of the justification byte fields 316 or 318 to send CEP packet data, and decreases SF PTR 304 of the next JSF by one. In this way, the VC adaptor 802 sends out one less byte in a JSF period.
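A hedged sketch of the decision logic just described for justification formatter 908: high buffer occupancy triggers negative justification (one extra byte per JSF period, SF PTR of the next JSF increased by one), and low occupancy triggers positive justification (one less byte, SF PTR decreased by one). The watermark values are illustrative, and the pointer arithmetic ignores wrap-around.

```python
JUST_NOMINAL, JUST_POSITIVE, JUST_NEGATIVE = 0b00, 0b01, 0b10

def choose_justification(buffer_fill: int, low: int, high: int) -> tuple[int, int]:
    """Return (Justification Ind. code, CEP bytes carried in the justification fields)."""
    if buffer_fill >= high:          # buffer nearly full: VC rate above nominal
        return JUST_NEGATIVE, 2      # use both justification byte fields
    if buffer_fill <= low:           # buffer nearly empty: VC rate below nominal
        return JUST_POSITIVE, 0      # use neither justification byte field
    return JUST_NOMINAL, 1           # nominal rate: use the positive byte field only

sf_ptr = 10
for fill in (5, 50, 95):                               # simulated buffer occupancies
    code, used = choose_justification(fill, low=10, high=90)
    sf_ptr += used - 1                                 # next JSF's SF PTR shifts by -1, 0 or +1
    print(f"fill={fill:3d} -> Justification Ind. {code:02b}, "
          f"justification bytes used {used}, next SF PTR {sf_ptr}")
```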
Recall that in the egress direction, the input high data rate streams received over the high data rate port 120 are de-multiplexed into multiple asynchronous VC channels 118. As shown in FIG. 10, the egress module 124 includes a de-multiplexer (De-MUX) 1002 and a VC generator 1004 for each of the VC channels 118.
An embodiment of a VC generator 1004 is illustrated in FIG. 11.
VC generator 1004 receives the de-multiplexed data stream from the De-MUX 1002. At the VC channel port, the VC generator outputs VC_CLK, VC_DATA and VC_SYNC, which are described above.
The data stream from De-MUX 1002 is stored in data buffer 1104 and input to justification decoder 1102. In justification decoder 1102, the justification header is decoded and removed. By decoding the SF PTR field (304 in FIG. 3) and the Justification Ind. field 306, the justification decoder 1102 locates the CEP header within the JSF and determines whether the justification byte fields 316 and 318 carry CEP packet data.
In CEP header decoder 1108, the CEP header is parsed, and the start position of the VC frame is found by decoding the structure pointer field (216 in FIG. 2). The recovered VC data is then output on the corresponding one of the VC channels 118, with VC_SYNC asserted at the recovered frame start position.
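To complement the ingress-side packing sketch given earlier, the following assumes the same 152-byte JSF layout (three header bytes, a 147-byte data region, two trailing justification byte fields) and shows the egress-side decoding performed conceptually by justification decoder 1102: strip the JSF header, keep the trailing bytes only when Justification Ind. indicates they carry data, and report where the CEP header starts. The layout details and names are assumptions, not details taken from the figures.

```python
HDR_LEN, DATA_LEN = 3, 147          # assumed JSF layout, as in the packing sketch above
JUST_NOMINAL, JUST_POSITIVE, JUST_NEGATIVE = 0b00, 0b01, 0b10

def decode_jsf(jsf: bytes) -> tuple[int, bytes]:
    """Return (SF PTR value, CEP packet bytes recovered from one 152-byte JSF)."""
    if len(jsf) != HDR_LEN + DATA_LEN + 2:
        raise ValueError("unexpected JSF length")
    sf_ptr, just_ind = jsf[0], jsf[1] & 0b11
    data = bytearray(jsf[HDR_LEN:HDR_LEN + DATA_LEN])   # fixed data region
    if just_ind == JUST_NOMINAL:
        data.append(jsf[-2])                 # positive justification byte carried data
    elif just_ind == JUST_NEGATIVE:
        data += jsf[-2:]                     # both justification bytes carried data
    # JUST_POSITIVE: neither trailing byte carried data, nothing to append
    return sf_ptr, bytes(data)

# Round-trip check against the assumed layout.
jsf = bytes([7, JUST_NEGATIVE, 0]) + bytes(DATA_LEN) + bytes([0xCD, 0xEF])
sf_ptr, data = decode_jsf(jsf)
print(sf_ptr, len(data), data[-2:])          # 7 149 b'\xcd\xef'
```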
In this manner, embodiments provide the ability for a link layer processor to drop/insert multiple asynchronous VC channels through a single high data rate port, which makes it possible to upgrade current link layer processor architectures to operate more efficiently in a CEP application environment.
At least a portion of the circuitry and methodologies described herein can be implemented in one or more integrated circuits. In forming integrated circuits, die are typically fabricated in a repeated pattern on a surface of a semiconductor wafer. Each of the die can include a device described herein, and can include other structures or circuits. Individual die are cut or diced from the wafer, then packaged as integrated circuits. One ordinarily skilled in the art would know how to dice wafers and package die to produce integrated circuits. Integrated circuits so manufactured are considered embodiments of this invention.
It is to be appreciated that the particular circuitry arrangements shown in FIGS. 1 and 8-12, and the frame formats of FIGS. 2-7, are presented by way of illustrative example only, and other embodiments may use different circuitry arrangements and frame formats.
It should be noted that portions of the data rate justification circuitry 110, and possibly other components of the node 102, may be implemented at least in part in the form of one or more software programs running on a processor. A memory associated with mapper 112, link layer processor 114, or host processor 116 may be used to store executable program code of this type. Such a memory is an example of what is more generally referred to herein as a “computer program product” or a “computer-readable storage medium” having executable computer program code embodied therein. The computer program code, when executed in a mapper, link layer processor, host processor, or other communication device processor, causes the device to perform one or more operations associated with data rate justification circuitry 110. Other examples of computer program products in embodiments of the invention include optical and magnetic disks.
Although embodiments of the invention have been described herein with reference to the accompanying drawings, it is to be understood that embodiments of the invention are not limited to the described embodiments, and that various changes and modifications may be made by one skilled in the art resulting in other embodiments of the invention within the scope of the following claims.