This invention relates generally to encoding and decoding and, in particular, to staircase Forward Error Correction (FEC) coding.
FEC coding provides for correction of errors in communication signals. Higher coding gains provide for correction of more errors, and can thereby provide for more reliable communications and/or allow signals to be transmitted at lower power levels.
The Optical Transport Hierarchy (OTH), for example, is a transport technology for the Optical Transport Network (OTN) developed by the International Telecommunication Union (ITU). The main implementation of OTH is described in two recommendations by the Telecommunication Standardization Sector of the ITU (ITU-T), including:
Recommendation G.709/Y.1331, entitled “Interfaces for the Optical Transport Network (OTN)”, December 2009, with an Erratum 1 (May 2010), an Amendment 1 (July 2010), and a Corrigendum 1 (July 2010); and
Recommendation G.872, entitled “Architecture of optical transport networks”, November 2001, with an Amendment 1 (December 2003), a Correction 1 (January 2005), and an Amendment 2 (July 2010).
G.709 defines a number of layers in an OTN signal hierarchy. Client signals are encapsulated into Optical channel Payload Unit (OPUk) signals at one of k levels of the OTN signal hierarchy. An Optical channel Data Unit (ODUk) carries the OPUk and supports additional functions such as monitoring and protection switching. An Optical channel Transport Unit (OTUk) adds FEC coding. Optical Channel (OCh) signals in G.709 are in the optical domain, and result from converting OTUk signals from electrical form to optical form.
FEC coding as set out in G.709 provides for 6.2 dB coding gain. ITU-T Recommendation G.975.1, entitled “Forward error correction for high bit-rate DWDM submarine systems”, February 2004, proposes an enhanced FEC coding scheme with improved coding gain.
Further improvements in coding gain, without impractical additional processing resources, remain a challenge.
Examples of embodiments of the invention will now be described in greater detail with reference to the accompanying drawings.
A staircase code is a blockwise recursively encoded forward error correction scheme. It can be considered a generalization of the product code construction to a family of variable latency codes, wherein the granularity of the latency is directly related to the size of the “steps”, which are themselves connected in a product-like fashion to create the staircase construction.
In staircase encoding as disclosed herein, symbol blocks include data symbols and coding symbols. Data symbols in a stream of data symbols are mapped to a series of two-dimensional symbol blocks. The coding symbols could be computed across multiple symbol blocks in such a manner that concatenating a row of the matrix transpose of a preceding encoded symbol block with a corresponding row of a symbol block that is currently being encoded forms a valid codeword of a FEC component code. For example, when encoding a second symbol block in the series of symbol blocks, the coding symbols in the first row of the second symbol block are chosen so that the first row of the matrix transpose of the first symbol block, the data symbols of the first row of the second symbol block, and the coding symbols of the same row of the second block together form a valid codeword of the FEC component code.
Coding symbols could equivalently be computed by concatenating a column of the previous encoded symbol block with a corresponding column of the matrix transpose of the symbol block that is currently being encoded.
With this type of relationship between symbol blocks, in a staircase structure that includes alternating encoded symbol blocks and matrix transposes of encoded symbol blocks, each two-block wide row along a stair “tread” and each two-block high column along a stair “riser” forms a valid codeword of the FEC component code.
In some embodiments, a large frame of data can be processed in a staircase structure, and coding gain approaching the Shannon limit for a channel can be achieved. Low-latency, high-gain coding is possible. For 1.25 Mb to 2 Mb latency, for example, some embodiments might achieve a coding gain of 9.4 dB for a coding rate of 239/255, while maintaining a burst error correction capability and error floor which are consistent with other coding techniques that exhibit lower coding gains and/or higher latency.
During FEC encoding according to a staircase code, a FEC code in systematic form is first selected to serve as the component code. This code, hereinafter C, is selected to have a codeword length of 2m symbols, r of which are parity symbols. As illustrated in
In light of this notation, for illustrative purposes consider the example sub-division of a symbol block as shown in
The entries of the symbol block B0 are set to predetermined symbol values. For i≥1, data symbols, specifically m(m−r) such symbols, which could include information that is received from a streaming source for instance, are arranged or distributed into Bi,L by mapping the symbols into Bi,L. Then, the entries of Bi,R are computed. Thus, data symbols from a symbol stream are mapped to data symbol positions Bi,L in a sequence of two-dimensional symbol blocks B and coding symbols for the coding symbol positions Bi,R in each symbol block are computed.
In computing the coding symbols according to one example embodiment, an m by (2m−r) matrix A=[Bi−1T Bi,L] is formed, where Bi−1T is the matrix transpose of Bi−1. The entries of Bi,R are then computed such that each of the rows of the matrix [A Bi,R] is a valid codeword of C. That is, the elements in the jth row of Bi,R are exactly the r coding symbols that result from encoding the 2m−r symbols in the jth row of A.
Generally, the relationship between successive blocks in a staircase code satisfies the following relation: For any i≥1, each of the rows of the matrix [Bi−1T Bi] is a valid codeword of C.
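The recursion above can be illustrated with a toy example. The sketch below is illustrative only: it uses a single-parity-check code as the FEC component code C, with assumed toy parameters m=4 and r=1 rather than any code contemplated for a practical system, and fills the coding symbol positions Bi,R so that every row of [Bi−1T Bi] has even parity:

```python
import random

M, R = 4, 1  # toy parameters: m=4 symbols per block side, r=1 parity symbol

def spc_parity(bits):
    # single-parity-check component code: one parity bit makes the XOR of the codeword zero
    p = 0
    for b in bits:
        p ^= b
    return [p]

def transpose(B):
    return [list(col) for col in zip(*B)]

def encode_block(prev_block, data_rows):
    """Given B_{i-1} and the m x (m-r) data part Bi,L of B_i, fill in the parity part Bi,R."""
    prev_T = transpose(prev_block)
    block = []
    for j in range(M):
        # row j of B_i = data symbols + parity over [row j of B_{i-1}^T, data symbols]
        row = data_rows[j] + spc_parity(prev_T[j] + data_rows[j])
        block.append(row)
    return block

random.seed(1)
B = [[[0] * M for _ in range(M)]]  # B_0: predetermined (all-zero) block
for i in range(1, 4):
    data = [[random.randint(0, 1) for _ in range(M - R)] for _ in range(M)]
    B.append(encode_block(B[i - 1], data))

# staircase property: every row of [B_{i-1}^T  B_i] is a valid component codeword
for i in range(1, 4):
    prev_T = transpose(B[i - 1])
    for j in range(M):
        assert sum(prev_T[j] + B[i][j]) % 2 == 0
print("all rows of [B_{i-1}^T B_i] are valid codewords")
```

The same loop, with a practical systematic component encoder in place of the single-parity check, yields the encoding recursion described above.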
An equivalent description of staircase codes, from which their name originates, is suggested in
Consider the first two-block column that spans the first column of B1 404 and the first column of B2T 406. In the example computation described above, the coding symbols in the first row of B2 would be computed such that the first row of [B1T B2,L B2,R] is a valid codeword of C. Since the first column of B1 404 would be the first row in B1T, and similarly the first column of B2T 406 would be the first row of B2, the staircase structure 400 is consistent with the foregoing example coding symbol computation.
Therefore, it can be seen that coding symbols for a block Bi could be computed row-by-row using corresponding rows of Bi−1T and Bi, as described above. A column-by-column computation using corresponding columns of Bi−1 and BiT would be equivalent. Stated another way, coding symbols could be computed for the coding symbol positions in each symbol block Bi, where i is a positive integer, in a sequence such that symbols at symbol positions along one dimension (row or column) of the two-dimensional symbol block Bi−1 in the sequence, concatenated with the data symbols and the coding symbols along the other dimension (column or row) in the symbol block Bi, form a codeword of a FEC component code. In a staircase code, symbols at symbol positions along the one dimension (row or column) of the symbol block Bi in the sequence, concatenated with the data symbols and the coding symbols along the other dimension (column or row) in the symbol block Bi+1, also form a codeword of the FEC component code.
The two dimensions of the symbol blocks in this example are rows and columns. Thus, in one embodiment, the concatenation of symbols at symbol positions along a corresponding column and row of the symbol blocks Bi−1 and Bi, respectively, forms a codeword of the FEC component code, and the concatenation of symbols at symbol positions along a corresponding column and row of the symbol blocks Bi and Bi+1, respectively, also forms a codeword of the FEC component code. The “roles” of columns and rows could instead be interchanged. The coding symbols could be computed such that the concatenation of symbols at symbol positions along a corresponding row and column of the symbol blocks Bi−1 and Bi, respectively, forms a codeword of the FEC component code, and the concatenation of symbols at symbol positions along a corresponding row and column of the symbol blocks Bi and Bi+1, respectively, also forms a codeword of the FEC component code.
In the examples above, a staircase code is used to encode a sequence of m by m symbol blocks. The definition of staircase codes can be extended to allow each block Bi to be an n by m array of symbols, for n≥m. As shown in
The (n−m) supplemental rows or columns which are added to form the Di matrices in this example are added solely for the purposes of computing coding symbols. The added rows or columns need not be transmitted to a decoder with the data and coding symbols, since the same predetermined added symbols can also be added at the receiver during decoding.
The example method 600 is intended solely for illustrative purposes. Variations of the example method 600 are contemplated.
For example, all data symbols in a stream need not be mapped to symbol blocks at 602 before coding symbols are computed at 604. Coding symbols for a symbol block could be computed when the data symbol positions in that symbol block have been mapped to data symbols from the stream, or even as each row or column in a symbol block is mapped, depending on the computation being used. Thus, the mapping at 602 and the computing at 604 need not strictly be serial processes, in that the mapping need not be completed for an entire stream of data symbols before computing of coding symbols begins.
The mapping at 602 and/or the computing at 604 could involve operations that have not been explicitly shown in
Embodiments have been described above primarily in the context of code structures and methods.
The interfaces 802, 812, the transmitter 806, and the receiver 808 represent components that enable the example apparatus 800 to transfer data symbols and FEC encoded data symbol blocks. The structure and operation of each of these components is dependent upon physical media and signalling mechanisms or protocols over which such transfers take place. In general, each component includes at least some sort of physical connection to a transfer medium, possibly in combination with other hardware and/or software-based elements, which will vary for different transfer media or mechanisms.
The interfaces 802, 812 enable the apparatus 800 to receive and send, respectively, streams of data symbols. These interfaces could be internal interfaces in a communication device or equipment, for example, that couple the FEC encoder 804 and the FEC decoder 810 to components that generate and process data symbols. Although labelled differently in
The FEC encoder and the FEC decoder could be implemented in any of various ways, using hardware, firmware, one or more processors executing software stored in computer-readable storage, or some combination thereof. Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), and microprocessors for executing software stored on a non-transitory computer-readable medium such as a magnetic or optical disk or a solid state memory device, are examples of devices that might be suitable for implementing the FEC encoder 804 and/or the FEC decoder 810.
In operation, the example apparatus 800 provides for FEC encoding and decoding. As noted above, however, encoding and decoding could be implemented separately instead of in a single apparatus as shown in
The FEC encoder 804 maps data symbols, from a stream of data symbols received by the interface 802 from a streaming source for instance, to data symbol positions in a sequence of two-dimensional symbol blocks Bi. As described above, each symbol block includes data symbol positions and coding symbol positions. The FEC encoder computes coding symbols for the coding symbol positions in each symbol block Bi in the sequence such that, for each symbol block Bi that has a preceding symbol block Bi−1 and a subsequent symbol block Bi+1 in the sequence, symbols at symbol positions along one dimension of the preceding symbol block Bi−1, concatenated with the data symbols and the coding symbols along the other dimension in the symbol block Bi, form a codeword of a FEC component code, and symbols at symbol positions along the one dimension of the symbol block Bi, concatenated with the data symbols and the coding symbols along the other dimension in the subsequent symbol block Bi+1, form a codeword of the FEC component code. FEC encoded data symbols could then be transmitted over a communication medium by the transmitter 806.
FEC decoding is performed by the FEC decoder 810 on a sequence of FEC encoded two-dimensional symbol blocks Bi. These symbol blocks are received through an interface, which in the example apparatus 800 would be the receiver 808. Each of the received symbol blocks includes received versions of data symbols at data symbol positions and coding symbols at coding symbol positions. The coding symbols for the coding symbol positions in each symbol block Bi in the sequence would have been computed at a transmitter of the received symbol blocks. The transmitter might be a transmitter 806 at a remote communication device or equipment. The coding symbol computation at the transmitter is such that, for each symbol block Bi that has a preceding symbol block Bi−1 and a subsequent symbol block Bi+1 in the sequence, symbols at symbol positions along one dimension of the preceding symbol block Bi−1, concatenated with the data symbols and the coding symbols along the other dimension in the symbol block Bi, form a codeword of a FEC component code, and symbols at symbol positions along the one dimension of the symbol block Bi, concatenated with the data symbols and the coding symbols along the other dimension in the subsequent symbol block Bi+1, form a codeword of the FEC component code. The FEC decoder 810 decodes the received FEC encoded symbol blocks.
Operation of the FEC encoder 804 and/or the FEC decoder 810 could be adjusted depending on expected or actual operating conditions. For example, where a particular application does not require maximum coding gain, a higher latency coding could be used to improve other coding parameters, such as burst error correction capability and/or error floor. In some embodiments, high coding gain and low latency are of primary importance, whereas in other embodiments different coding parameters could take precedence.
Other functions might also be supported at encoding and/or decoding apparatus.
In operation, the OTUk frame generator 902 generates frames that include data and parity information positions. Data symbols in the OTUk frame data positions are mapped to data positions in the blocks B as described above. Coding symbols are then computed by the FEC encoder 906 and used to populate the parity information positions in the OTUk frames generated by the OTUk frame generator 902 in the example shown. At the receive side, the OTUk framer 908 receives signals over the optical channel and delineates OTUk frames, from which data and parity symbols are demapped by the demapper 910 and used by the FEC decoder 912 in decoding.
Examples of staircase FEC codes, encoding, and decoding have been described generally above. More detailed examples are provided below. It should be appreciated that the following detailed examples are intended solely for non-limiting and illustrative purposes.
As a first example of a G.709-compatible FEC staircase code, consider a 512×510 staircase code, in which each bit is involved in two triple-error-correcting (1022, 990) component codewords. The parity-check matrix H of this example component code is specified in Appendix B.2. The assignment of bits to component codewords is described by first considering successive two-dimensional blocks Bi, i≥0, of binary data, each with 512 rows and 510 columns. The binary value stored in position (row,column)=(j,k) of Bi is denoted di{j,k}.
In each such block, information bits are stored as di{j,k}, 0≤j≤511, 0≤k≤477, and parity bits are stored as di{j,k}, 0≤j≤511, 478≤k≤509. The parity bits are computed as follows:
For row j, 0≤j≤1, select di{j,478}, di{j,479}, . . . , di{j,509}, such that v=[0, 0, . . . , 0, di{j,0}, di{j,1}, . . . , di{j,509}]
satisfies
HvT=0
For row j, 2≤j≤511, select di{j,478}, di{j,479}, . . . , di{j,509}, such that v=[di−1{0,l}, di−1{1,l}, . . . , di−1{511,l}, di{j,0}, di{j,1}, . . . , di{j,509}]
satisfies
HvT=0
where l=Π(j−2) and Π is a permutation function specified in Appendix B.1.
The information bits in block Bi in this example map to two G.709 OTUk frames, i.e., frames 2i and 2i+1. The parity bits for frames 2i and 2i+1 are the parity bits from block Bi−1. The parity bits of the two OTUk frames into which the information symbols in symbol block B1 are mapped can be assigned arbitrary values.
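The parity computation for this example can be outlined in code. The sketch below reproduces the indexing structure (the all-zero prefix for rows 0 and 1, and a column of Bi−1 selected by a permutation for the remaining rows), but substitutes a stand-in parity function and an identity permutation for the actual (1022, 990) component encoder of Appendix B.2 and the permutation Π of Appendix B.1, so it is a structural illustration only:

```python
import random

ROWS, COLS, PARITY = 512, 510, 32  # block geometry of the 512x510 staircase code
INFO_COLS = COLS - PARITY          # columns 0..477 hold information bits

def component_parity(prefix):
    """Stand-in for the systematic (1022, 990) component encoder: derives 32
    parity bits from the 990-bit prefix. NOT the real code of Appendix B.2."""
    parity = [0] * PARITY
    for pos, bit in enumerate(prefix):
        parity[pos % PARITY] ^= bit
    return parity

def perm(j):
    # stand-in for the Appendix B.1 permutation; identity for illustration only
    return j

def encode_block(prev, info):
    """prev: previous 512x510 encoded block; info: 512x478 information bits."""
    block = []
    for j in range(ROWS):
        if j < 2:
            head = [0] * ROWS                         # rows 0..1: all-zero prefix
        else:
            l = perm(j - 2)
            head = [prev[k][l] for k in range(ROWS)]  # column l of B_{i-1}
        prefix = head + info[j]                       # 512 + 478 = 990 bits
        block.append(info[j] + component_parity(prefix))
    return block

random.seed(0)
B0 = [[0] * COLS for _ in range(ROWS)]  # predetermined initial block
info = [[random.randint(0, 1) for _ in range(INFO_COLS)] for _ in range(ROWS)]
B1 = encode_block(B0, info)
assert all(len(row) == COLS for row in B1)
```

With the real component encoder and Π substituted in, each constructed 1022-bit vector v would satisfy HvT=0 as specified above.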
As shown in
di{m mod 512, └m/512┘}, 30592l≤m≤30592l+30591 Information
di−1{m mod 512, 478+└m/512┘}, 2048l≤m≤2048l+2047 Parity
The precise assignment of bits to frames, as a function of l, is as follows:
Frame 2i, row 1: l=0
Frame 2i, row 2: l=1
Frame 2i, row 3: l=2
Frame 2i, row 4: l=3
Frame 2i+1, row 1: l=4
Frame 2i+1, row 2: l=5
Frame 2i+1, row 3: l=6
Frame 2i+1, row 4: l=7.
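As a consistency check of the mapping formulas above, the following sketch verifies that, over the eight chunks l=0..7, the information mapping visits each of the 512×478 information positions exactly once and the parity mapping visits each of the 512×32 parity positions exactly once:

```python
ROWS, INFO_COLS, PARITY_COLS = 512, 478, 32

# Information bits of block B_i: chunks l = 0..7 of 30592 bits each, transmitted
# as d_i{m mod 512, floor(m/512)} for m in [30592*l, 30592*l + 30591].
info_positions = set()
for l in range(8):
    for m in range(30592 * l, 30592 * l + 30592):
        info_positions.add((m % 512, m // 512))
assert info_positions == {(r, c) for r in range(ROWS) for c in range(INFO_COLS)}

# Parity bits (taken from B_{i-1}): d_{i-1}{m mod 512, 478 + floor(m/512)},
# chunks l = 0..7 of 2048 bits each.
parity_positions = set()
for l in range(8):
    for m in range(2048 * l, 2048 * l + 2048):
        parity_positions.add((m % 512, 478 + m // 512))
assert parity_positions == {(r, c) for r in range(ROWS) for c in range(478, 478 + PARITY_COLS)}
print("mapping covers every information and parity position exactly once")
```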
In another example, each bit in a 196×187 staircase code is involved in two triple-error-correcting (383, 353) component codewords. The parity-check matrix H of the component code is specified in Appendix C.2. The assignment of bits to component codewords is again described by first considering successive two-dimensional blocks Bi, i≥0, of binary data, each with 196 rows and 187 columns. The binary value stored in position (row,column)=(j,k) of Bi is denoted di{j,k}.
In each such block, information bits are stored as di{j,k}, 0≤j≤195, 0≤k≤156, and parity bits are stored as di{j,k}, 0≤j≤195, 157≤k≤186. The parity bits are computed as follows:
For row j, 0≤j≤8, select di{j,157}, di{j,158}, . . . , di{j,186}, such that v=[0, 0, . . . , 0,di{j,0}, di{j,1}, . . . , di{j,186}]
satisfies
HvT=0.
For row j, 9≤j≤195, select di{j,157}, di{j,158}, . . . , di{j,186}, such that v=[di−1{0,l}, di−1{1,l}, . . . , di−1{195,l}, di{j,0}, di{j,1}, . . . , di{j,186}]
satisfies
HvT=0.
where l=Π(j−9), and Π is a permutation function specified in Appendix C.1.
The information bits in block Bi map to one OTUk row. The parity bits for row i are the parity bits from block Bi−1; although there are four rows per OTUk frame, all rows could be numbered consecutively, ignoring frame boundaries. The information and parity bits to be mapped to row i, and their specific order of transmission, are specified as follows:
di{m mod 196, └m/196┘}, 180≤m≤30771 Information
di−1{m mod 196, 157+└m/196┘}, 0≤m≤5879 Parity
In this example, eight dummy bits are appended to the end of the parity stream to complete an OTUk row. Furthermore, the first 180 bits in the first column of each staircase block are fixed to zero, and thus need not be transmitted.
Syndrome-based iterative decoding can be used to decode a received signal. Generation of the syndromes is done in a similar fashion to the encoding. The resulting syndrome equation could be solved using a standard FEC decoding scheme and error locations are determined. Bit values at error locations are then flipped, and standard iterative decoding proceeds.
The latency of decoding is a function of the number of blocks used in the decoding process. Generally, increasing the number of blocks improves the coding gain. Decoding could be configured in various latency modes to trade off between latency and coding gain.
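The syndrome-based approach can be illustrated with a toy staircase code. In the sketch below, a single-parity-check component code (assumed toy block size m=4) stands in for a practical FEC component code; a single bit error in a block flips exactly one syndrome in each of the two codeword directions that the bit participates in, so intersecting the failing row and the failing column locates the error, which is then corrected by flipping the bit:

```python
import random

M = 4  # toy block size; single-parity-check (SPC) code stands in for the real component code

def T(B):
    return [list(c) for c in zip(*B)]

def encode(prev, data):
    # parity column makes each row of [prev^T  block] have even parity
    return [data[j] + [sum(T(prev)[j] + data[j]) % 2] for j in range(M)]

random.seed(7)
blocks = [[[0] * M for _ in range(M)]]  # B_0: predetermined all-zero block
for i in range(1, 4):
    blocks.append(encode(blocks[i - 1], [[random.randint(0, 1) for _ in range(M - 1)] for _ in range(M)]))

def syndromes(prev, cur):
    # syndrome of row j of [prev^T  cur]: 0 when the row is a valid SPC codeword
    return [sum(T(prev)[j] + cur[j]) % 2 for j in range(M)]

# introduce a single bit error in B_1 and locate it from the two intersecting syndromes
j_err, k_err = 2, 1
blocks[1][j_err][k_err] ^= 1
s_left = syndromes(blocks[0], blocks[1])   # codewords spanning B_0 and B_1: flags the row
s_right = syndromes(blocks[1], blocks[2])  # codewords spanning B_1 and B_2: flags the column
row, col = s_left.index(1), s_right.index(1)
assert (row, col) == (j_err, k_err)
blocks[1][row][col] ^= 1                   # flip the located bit
assert syndromes(blocks[0], blocks[1]) == [0] * M
assert syndromes(blocks[1], blocks[2]) == [0] * M
```

A practical decoder would instead run a component-code decoder on each failing codeword within a sliding window of blocks, flipping the bits at the error locations it returns and iterating until the syndromes clear or an iteration limit is reached.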
What has been described is merely illustrative of the application of principles of embodiments of the invention. Other arrangements and methods can be implemented by those skilled in the art without departing from the scope of the present invention.
For example, the divisions of functions shown in
As noted above, the roles of columns and rows in a staircase code could be interchanged. Other mappings between bits or symbols in successive blocks might also be possible. For instance, coding symbols in a block that is currently being encoded could be determined on the basis of symbols at symbol positions along a diagonal direction in a preceding block and the current block. In this case, a certain number of symbols could be selected along the diagonal direction in each block, starting at a certain symbol position (e.g., one corner of each block) and progressing through symbol positions along different diagonals if necessary, until the number of symbols for each coding computation has been selected. As coding symbols are computed, the process would ultimately progress through each symbol position along each diagonal. Other mappings might be or become apparent to those skilled in the art.
In addition, although described primarily in the context of code structures, methods and systems, other implementations are also contemplated, as instructions stored on a non-transitory computer-readable medium, for example.
For a root α of the primitive polynomial p(x)=1+x^3+x^10, the non-zero field elements of GF(2^10) can be represented as
α^i, 0≤i≤1022,
which we refer to as the “power” representation. Equivalently, we can write
α^i=b9α^9+b8α^8+ . . . +b0, 0≤i≤1022;
we refer to the integer l=b9·2^9+b8·2^8+ . . . +b0 as the “binary” representation of the field element. We further define the function log(⋅) and its inverse exp(⋅) such that for l, the binary representation of α^i, we have
log(l)=i
and
exp(i)=l.
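The power and binary representations, and the log(⋅)/exp(⋅) tables, can be computed directly. The sketch below builds the tables for GF(2^10) with p(x)=1+x^3+x^10 by repeated multiplication by α in the binary representation:

```python
# Build "power" <-> "binary" representation tables for GF(2^10) with
# p(x) = 1 + x^3 + x^10.
PRIM = 0x409  # bit pattern of x^10 + x^3 + 1

exp_table = [0] * 1023  # exp_table[i] = binary representation of alpha^i
v = 1                   # alpha^0 = 1
for i in range(1023):
    exp_table[i] = v
    v <<= 1             # multiply by alpha
    if v & 0x400:       # degree reached 10: reduce using x^10 = x^3 + 1
        v ^= PRIM

log_table = {l: i for i, l in enumerate(exp_table)}  # log(l) = i for l = alpha^i

assert exp_table[0] == 1
assert len(log_table) == 1023  # alpha is primitive: every non-zero element occurs once
assert log_table[exp_table[100]] == 100
```

For example, α^10 reduces to x^3+1, whose binary representation is 9, so log(9)=10 and exp(10)=9.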
B.1—Specification of Π
Π is a permutation function on the integers i, 0≤i≤509. In the following, Π(M:M+N)=K:K+N is shorthand for Π(M)=K, Π(M+1)=K+1, . . . , Π(M+N)=K+N. The definition of Π is as follows:
Π(0:7)=478:485 Π(8)=0 Π(9:11)=486:488 Π(12)=1
Π(13)=489 Π(14:16)=2:4 Π(17:19)=490:492 Π(20)=5
Π(21)=493 Π(22:24)=6:8 Π(25)=494 Π(26:32)=9:15
Π(33:35)=495:497 Π(36)=16 Π(37)=498 Π(38:40)=17:19
Π(41)=499 Π(42:48)=20:26 Π(49)=500 Π(50:64)=27:41
Π(65:67)=501:503 Π(68)=42 Π(69)=504 Π(70:72)=43:45
Π(73)=505 Π(74:80)=46:52 Π(81)=506 Π(82:128)=53:99
Π(129)=507 Π(130)=100 Π(131)=508 Π(132:256)=101:225
Π(257)=509 Π(258:509)=226:477
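Since Π must be a bijection on the integers 0 to 509, the definition above can be checked mechanically:

```python
# Transcription of the Appendix B.1 definition of Pi; each (M, M+N, K) triple
# encodes Pi(M:M+N) = K:K+N, assigning consecutive values to consecutive arguments.
pi = {}
for m_lo, m_hi, k_lo in [(0,7,478),(8,8,0),(9,11,486),(12,12,1),(13,13,489),(14,16,2),
                         (17,19,490),(20,20,5),(21,21,493),(22,24,6),(25,25,494),(26,32,9),
                         (33,35,495),(36,36,16),(37,37,498),(38,40,17),(41,41,499),(42,48,20),
                         (49,49,500),(50,64,27),(65,67,501),(68,68,42),(69,69,504),(70,72,43),
                         (73,73,505),(74,80,46),(81,81,506),(82,128,53),(129,129,507),
                         (130,130,100),(131,131,508),(132,256,101),(257,257,509),(258,509,226)]:
    for off in range(m_hi - m_lo + 1):
        pi[m_lo + off] = k_lo + off

# Pi must be a permutation of the integers 0..509
assert sorted(pi.keys()) == list(range(510))
assert sorted(pi.values()) == list(range(510))
```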
B.2—Parity-Check Matrix
Consider the function ƒ which maps an integer i, 1≤i≤1023, to the column vector
where
βi=α^log(i), and
F(βi)=b2l
for l the binary representation of βi, and
C.1—Specification of Π
Π is a permutation function on the integers i, 0≤i≤186. In the following, Π(M:M+N)=K:K+N is shorthand for Π(M)=K,Π(M+1)=K+1, . . . , Π(M+N)=K+N. The definition of Π is as follows:
Π(0:7)=157:164 Π(8)=0 Π(9:11)=165:167 Π(12)=1
Π(13)=168 Π(14:16)=2:4 Π(17:19)=169:171 Π(20)=5
Π(21)=172 Π(22:24)=6:8 Π(25)=173 Π(26:32)=9:15
Π(33:35)=174:176 Π(36)=16 Π(37)=177 Π(38:40)=17:19
Π(41)=178 Π(42:48)=20:26 Π(49)=179 Π(50:64)=27:41
Π(65:67)=180:182 Π(68)=42 Π(69)=183 Π(70:72)=43:45
Π(73)=184 Π(74:128)=46:100 Π(129)=185 Π(130)=101
Π(131)=186 Π(132:186)=102:156
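The C.1 definition, which follows the same range pattern as B.1, can likewise be checked to be a bijection on the integers 0 to 186 (the triples below transcribe the assignments with the range lengths aligned on both sides):

```python
# Each (M, M+N, K) triple encodes Pi(M:M+N) = K:K+N for the Appendix C.1 permutation.
pi_c = {}
for m_lo, m_hi, k_lo in [(0,7,157),(8,8,0),(9,11,165),(12,12,1),(13,13,168),
                         (14,16,2),(17,19,169),(20,20,5),(21,21,172),(22,24,6),
                         (25,25,173),(26,32,9),(33,35,174),(36,36,16),(37,37,177),
                         (38,40,17),(41,41,178),(42,48,20),(49,49,179),(50,64,27),
                         (65,67,180),(68,68,42),(69,69,183),(70,72,43),(73,73,184),
                         (74,128,46),(129,129,185),(130,130,101),(131,131,186),
                         (132,186,102)]:
    for off in range(m_hi - m_lo + 1):
        pi_c[m_lo + off] = k_lo + off

# Pi must be a permutation of the integers 0..186
assert sorted(pi_c.keys()) == list(range(187))
assert sorted(pi_c.values()) == list(range(187))
```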
C.2—Parity-Check Matrix
Consider the function ƒ which maps an integer i, 315≤i≤697, to the column vector
where
βi=α^log(i).
for l the binary representation of βi. Then,
H=[ƒ(315) ƒ(316) ƒ(317) . . . ƒ(697)].
This application is a continuation of U.S. patent application Ser. No. 14/266,299 filed on Apr. 30, 2014, which is a continuation of U.S. patent application Ser. No. 13/085,810 filed on Apr. 13, 2011, the contents of both of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
5719884 | Roth et al. | Feb 1998 | A |
6810499 | Sridharan et al. | Oct 2004 | B2 |
7509563 | Ilani | Mar 2009 | B2 |
8065585 | Abbaszadeh et al. | Nov 2011 | B1 |
8276047 | Coe | Sep 2012 | B2 |
8335962 | Eberlein et al. | Dec 2012 | B2 |
20050251723 | Ilani | Nov 2005 | A1 |
20070143659 | Ball | Jun 2007 | A1 |
20070269218 | Zhang | Nov 2007 | A1 |
20070288833 | Djurdjevic et al. | Dec 2007 | A1 |
20080104479 | Lablans | May 2008 | A1 |
20090208218 | Xiao et al. | Aug 2009 | A1 |
20100299578 | Shin | Nov 2010 | A1 |
20110116515 | Van Houtum et al. | May 2011 | A1 |
20120183303 | Onohara et al. | Jul 2012 | A1 |
Number | Date | Country |
---|---|---|
1111799 | Jun 2001 | EP |
0176080 | Oct 2001 | WO |
2010057011 | May 2010 | WO |
Entry |
---|
Author: Wikipedia, Title: Application-Specific Integrated Circuit, Original Date: Nov. 16, 2002 (Year: 2002). |
E. Martinian, C-E W. Sundberg; “Low Delay Burst Erasure Correction Codes”, 2002, ICC 2002, IEEE International Conference on Communications, (vol. 3), pp. 1736-1740. |
Y.Q. Shi, X.M. Zhang, Z-C Ni & N. Ansari; “Interleaving for Combating Bursts of Errors,” IEEE Circuits and Systems Magazine, First Quarter 2004, pp. 29-42. |
E. Martinian, C-E W. Sundberg; “Burst Erasure Correction Codes With Low Decoding Delay”, IEEE Transactions on Information Theory, vol. 50, Issue 10, Oct. 2004, pp. 2494-2502. |
H. Liu, H. Ma, M. El Zarki & S. Gupta; “Error control schemes for networks: An overview”, Mobile Networks and Applications 2 (1997) pp. 167-182. |
S.J. Johnson & T. Pollock; “LDPC Codes for the Classic Bursty Channel”, International Symposium on Information Theory and its Applications, ISITA 2004, Parma, Italy, Oct. 10-13, 2004, 6 pages. |
A.J. McAuley; “Reliable Broadband Communication Using a Burst Erasure Correcting Code”, Presented at ACM SIGCOMM '90, Philadelphia, Pa. Sep. 1990, pp. 1-10. |
“Upstream FEC Errors and SNR as Ways to Ensure Data Quality and Throughput”, Document ID: 49780, Cisco Systems, Inc. Updated Oct. 4, 2005, 16 pages. |
ITU-T Telecommunication Standardization Sector of ITU, G.975.1 (Feb. 2004), “Series G: Transmission Systems and Media, Digital Systems and Networks”, 58 pages. |
W. Zhang, M. Lentmaier, K. SH. Zigangirov & D.J. Costello “Braided Convolutional Codes: A New Class of Turbo-Like Codes”, IEEE Transactions on Information Theory, (vol. 56, Issue 1), Jan. 2010, pp. 316-331. |
D. Truhachev, M. Lentmaier, K. Zigangirov; “On Braided Block Codes”, ISIT 2003, Yokohama, Japan, Jul. 4, 2003, 1 page. |
K. Zigangirov, A.J. Felstrom, M. Lentmaier, D. Truhachev, “Encoders and Decoders for Braided Block Codes”, ISIT 2006, Seattle, USA, Jul. 9-14, 2006, pp. 1808-1812. |
C.P.M.J. Baggen & L.M.G.M. Tolhuizen; “On Diamond Codes”, IEEE Transactions on Information Theory, vol. 43, No. 5, Sep. 1997, pp. 1400-1411. |
M.C.O. Bogino, P. Cataldi, M. Grangetto, E. Magli, G. Olmo; “Sliding-Window Digital Fountain Codes for Streaming of Multimedia Contents”, ISCAS 2007, IEEE International Symposium on Circuits and Systems, May 27-30, 2007, pp. 3467-3470. |
D. Sejdinovic, D. Vukobratovic, A. Doufexi, V. Senk & R.J. Piechocki; “Expanding Window Fountain Codes for Unequal Error Protection”, IEEE Transactions on Communications, vol. 57, No. 9, Sep. 2009, pp. 2510-2516. |
“Architecture of optical transport networks”, Amendment 1; International Telecommunication Union; ITU-T; G.872 (Dec. 2003), Series G: Transmission Systems and Media, Digital Systems and Networks; Digital networks—Optical Transport Networks. |
“Architecture of optical transport networks”, Amendment 2; International Telecommunication Union; ITU-T; G.872 (Jul. 2010), Series G: Transmission Systems and Media, Digital Systems and Networks; Digital networks—Optical Transport Networks. |
“Interfaces for the Optical Transport Network (OTN)”. International Telecommunication Union; ITU-T; G.709/Y.1331 (Dec. 2009). Series G: Transmission Systems and Media, Digital Systems and Networks; Digital terminal equipments—General; Series Y: Global Information Infrastructure, Internet Protocol Aspects and Next-Generation Networks; Internet protocol Aspects—Transport. |
“Interfaces for the Optical Transport Network (OTN)”. Amendment 1. International Telecommunication Union; ITU-T; G.709/Y.1331 (Jul. 2010). Series G: Transmission Systems and Media, Digital Systems and Networks; Digital terminal equipments—General; Series Y: Global Information Infrastructure, Internet Protocol Aspects and Next-Generation Networks; Internet protocol Aspects—Transport. |
“Interfaces for the Optical Transport Network (OTN)”. Corrigendum 1; International Telecommunication Union; ITU-T; G.709/Y.1331 (Jul. 2010). Series G: Transmission Systems and Media, Digital Systems and Networks; Digital terminal equipments—General; Series Y: Global Information Infrastructure, Internet Protocol Aspects and Next-Generation Networks; Internet protocol Aspects—Transport. |
Architecture of optical transport networks. Corrigendum 1; International Telecommunication Union; ITU-T; G.872 (Jan. 2005). Series G: Transmission Systems and Media, Digital Systems and Networks; Digital networks Optical Transport Networks. |
“Erratum 1 (May 2010) to Recommendation ITU-T G.709/Y.1331 (Dec. 2009), Interfaces for the Optical Transport Network (OTN)”. Covering Note; General Secretariat of the International Telecommunication Union. Geneva, May 4, 2010. |
“Architecture of optical transport networks”. International Telecommunication Union; ITU-T Recommendation; G.872 (Nov. 2001). Series G: Transmission Systems and Media, Digital Systems and Networks; Digital networks—Optical Transport Networks. |
“Forward error correction for high bit-rate DWDM submarine systems”. Corrigendum 1; International Telecommunication Union; ITU-T; G.975 (Feb. 2006). Series G: Transmission Systems and Media, Digital Systems and Networks; Digital sections and digital line system—Optical fibre submarine cable systems. |
“Forward error correction for high bit-rate DWDM submarine systems”. Telecommunication Standardization Sector; ITU-T; G.975.1; (Feb. 2004); Series G: Transmission Systems and Media Digital Systems and Networks. |
Alberto Jimenez Feltstrom et al, “Braided Block Codes”, IEEE Transactions on Information Theory, vol. 55, No. 6, Jun. 2009, 19 pages. |
Tom Richardson, et al, Modern Coding Theory, Cambridge University Press, Cambridge, New York, 18 pages. Published 2008. |
A.D. Wyner, “Analysis of Recurrent Codes”, IEEE Transactions on Information Theory, 14 pages. Published Jul. 1963. |
Frank R. Kschischang, “Factor Graphs and the Sum-Product Algorithm”, IEEE Transactions on Information Theory, vol. 47, No. 2, Feb. 2001, 22 pages. |
U.S. Final Office Action dated Aug. 1, 2013 in respect of U.S. Appl. No. 13/085,810 (20 pages). |
U.S. Office Action dated Mar. 13, 2013 in respect of U.S. Appl. No. 13/085,810 (19 pages). |
Notice of Allowance and Notice of Allowability dated Jan. 31, 2014 in respect of U.S. Appl. No. 13/085,810 (10 pages). |
Notice of Allowance and Notice of Allowability dated Mar. 17, 2016 in respect of U.S. Appl. No. 14/266,299 (15 pages). |
Number | Date | Country | |
---|---|---|---|
20160308558 A1 | Oct 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14266299 | Apr 2014 | US |
Child | 15194432 | US | |
Parent | 13085810 | Apr 2011 | US |
Child | 14266299 | US |