Optical switches are typically used to route optical signals through an optical network, such as the optical fiber infrastructure deployed in data centers, fronthaul networks, cross-haul networks, metro networks, and the like. For example, optical nodes deployed in a ring topology can provide connectivity with high throughput and bounded latency/jitter by performing much (if not all) of the processing and packet switching in the optical domain. Conventional optical transmission and switching is performed in an optical network using wavelength division multiplexing (WDM). Optical packet switching (OPS) is a switching technology that allows an optical switch in a node to route an input optical WDM packet to an optical output port without converting the entire WDM packet into an electrical/digital signal. An OPS node receives a WDM packet and converts an optical header in the WDM packet into a digital signal using optical-to-electrical-to-optical (OEO) conversion. Information in the header is used to configure the optical switch to route the optical payload of the WDM packet and to schedule the payload for transmission. The optical node does not convert the optical payload into an electrical/digital signal and, consequently, optical switching of the payload increases the capacity and energy efficiency of the optical network relative to switches or routers in networks that convert the entire packet into an electrical/digital signal.
The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.
Optical networks, particularly optical networks implemented according to Fifth Generation (5G) standards, are required to satisfy stringent latency requirements. One approach to satisfying the latency requirements is “deterministic networking.” Packet arrival times and latencies are known accurately in advance in a deterministic network. One deterministic networking technique is time-aware shaping of packets that are scheduled for transmission by a transmission scheduler that selects packets for scheduling from a set of ingress queues. A gate control list (GCL) identifies the ingress queues that are considered by the transmission scheduler in a sequence of time intervals that are referred to as traffic windows. The pattern of ingress queues that are considered in each traffic window is referred to as a gate control entity (GCE). The GCL is therefore a list of GCEs for the sequence of traffic windows. Different flows are mapped to different ingress queues. The GCL defines time-aware traffic windows in which only packets from ingress queues corresponding to specified flows are allowed to be transmitted. For example, the GCL can be configured so that only a first queue associated with a first flow is considered by the scheduler in a time window that corresponds to the time that a first frame in the first flow is expected to arrive in the first queue. All other queues are closed by the GCL in that time window. The scheduler then schedules the only available frame—the first frame in the first queue—for transmission, thereby avoiding collisions and the resulting transmission delays.
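The gating mechanism described above can be sketched in a few lines of Python (a simplified model that assumes one transmission opportunity per traffic window; the function and variable names are illustrative, not part of any standard):

```python
from collections import deque

def run_gcl_schedule(gcl, queues):
    """Simulate time-aware shaping: in each traffic window, the
    transmission scheduler may only select packets from the ingress
    queues whose gates the corresponding GCE leaves open."""
    transmitted = []
    for gce in gcl:                      # one GCE per traffic window
        for q in gce:                    # indices of the open gates
            if queues[q]:
                transmitted.append(queues[q].popleft())
                break                    # one packet per window (simplified)
    return transmitted

# Two flows mapped to two ingress queues; the GCL opens only the queue
# whose frame is expected to arrive in the corresponding window, so the
# scheduler never has to arbitrate between colliding flows.
queues = [deque(["flow0-frame0"]), deque(["flow1-frame0"])]
gcl = [[0], [1]]                         # window 0: queue 0 open; window 1: queue 1 open
print(run_gcl_schedule(gcl, queues))     # ['flow0-frame0', 'flow1-frame0']
```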
As discussed above, conventional optical switches are configured to route and schedule optical packets based on information included in an optical header of the optical packet. Implementing deterministic networking in a conventional optical network therefore requires OEO conversion of at least the header of the optical packet. Processing of the optical packet, including the OEO conversion needed to extract routing/scheduling information from the optical header of the optical packet, increases the latency of the packet and increases uncertainty in the processing time. The increased latency can render transmission of the optical packet non-deterministic, which is contrary to the goal of reducing network latency and nullifying jitter using deterministic networking. These problems are exacerbated when the optical network is heavily loaded because the scheduling policies for allocating the limited resources of the optical network result in higher latency and lower throughput. Furthermore, failure of one or more of the optical nodes results in the loss of optical packets and the resulting recovery time significantly degrades the average throughput of the optical network.
Some embodiments of the optical node partition optical packets into first and second portions that are encoded prior to transmitting the first and second portions to other optical nodes in the optical network. Different encodings are applied in different time intervals. Each round includes an initial time interval and subsequent time intervals that are partitioned into even time intervals (e.g., the second, fourth, etc.) and odd time intervals (e.g., the third, fifth, etc.). The optical node encodes the first and second portions, e.g., using an XOR operation implemented by the optical encoder, and transmits the encoded signal into the optical network in a first direction (such as clockwise) and a second direction (such as counterclockwise). Subsequently, in even time intervals, the first portion is encoded with optical signals received from the second direction and the encoded signal is transmitted in the first direction. The second portion is encoded with optical signals received from the first direction and the encoded signal is transmitted in the second direction. In odd time intervals, the second portion is encoded with optical signals received from the second direction and the encoded signal is transmitted in the first direction. The first portion is encoded with optical signals received from the first direction and the encoded signal is transmitted in the second direction. If no link failures have occurred, the optical signal transmitted by the optical node is recovered by the end of the round by all other optical nodes in the optical network, and the optical node can recover the optical signals transmitted by the other optical nodes during the round. In the event of one or more link failures, buffered values of linear combinations of the received signals are used to recover some or all of the lost packets. Thus, decreased latency and increased resilience to link failures are obtained at the cost of additional computational decoding complexity in the optical node.
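The recoverability property that underlies this scheme can be illustrated with a bitwise XOR over byte strings (a minimal sketch; the packet contents and helper names are illustrative):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR, the algebraic summation the optical encoder performs."""
    return bytes(x ^ y for x, y in zip(a, b))

# A node splits its packet into two equal portions P1 and P2.
packet = b"DETERMINISTIC-16"            # 16 bytes -> two 8-byte portions
p1, p2 = packet[:8], packet[8:]

# Initial interval: the node transmits the encoded signal P1 xor P2
# in both directions around the ring.
encoded = xor_bytes(p1, p2)

# A receiver that later obtains P1 (e.g., via an even-interval
# transmission) recovers P2 from the buffered encoded signal,
# because (P1 xor P2) xor P1 == P2.
recovered_p2 = xor_bytes(encoded, p1)
assert recovered_p2 == p2
print(recovered_p2)                     # b'ISTIC-16'
```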
The OPS nodes 105 are arranged in the ring topology to provide connectivity with high throughput and bounded latency/jitter by performing as much processing and packet switching in the optical domain as possible. Resources in the OPS nodes 105, such as the number of WDM wavelengths available to convey data or control signaling, are limited. Centralized scheduling and corresponding signaling are therefore used for communication between the OPS nodes 105 to minimize the amount of time spent waiting, the number of collisions between packet flows, and the number of packet flows that are dropped. Scheduling in this manner generally provides a better quality of service. If the observed performance does not satisfy targets for the scheduling policy, an operator of the OPS network 100 can upgrade the OPS nodes 105 by adding more transmission or reception wavelengths to offload the routing of traffic. However, this approach introduces additional operation and installation costs.
The conventional ring network 200 is configured to support non-blocking flows or, alternatively, no waiting delay or packet drop for the flows. The conventional ring network 200 is supporting three flows 210, 215, 220. The flow 210 (indicated by the dotted line) enters the conventional ring network 200 at the node 201, traverses the nodes 202, 203, and exits the conventional ring network 200 at the node 204. The flow 215 (indicated by the medium dashed line) enters the conventional ring network 200 at the node 204, traverses the node 201, and exits the conventional ring network 200 at the node 202. The flow 220 (indicated by the long dashed line) enters the conventional ring network 200 at the node 203, traverses the node 204, and exits the conventional ring network 200 at the node 201. Different types of resource allocation schemes can be applied to support the flows, and the different schemes have different advantages and drawbacks.
In one case, the nodes 201-204 do not support optical packet switching and instead the nodes 201-204 use optical circuit switching. The nodes 201-204 therefore support at least three wavelengths to convey the three overlapping flows 210, 215, 220 in different timeslots. Allocating at least three wavelengths allows each of the flows 210, 215, 220 to transmit without any overlapping. For example, a first wavelength can carry the flow 210 and a second wavelength can carry the flow 220. The first wavelength is unavailable between the node 201 and the node 202, while the second wavelength is unavailable between the node 201 and the node 204. Consequently, a third wavelength is activated to support the flow 215, which traverses the conventional ring network 200 between the node 201 and the node 204. This approach supports the flows 210, 215, 220 without blocking, waiting delays, or packet drops. However, the resources of the conventional ring network 200 are highly underutilized.
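The three-wavelength requirement can be checked mechanically by treating each flow as a set of directed ring links and searching for the smallest conflict-free wavelength assignment (a brute-force sketch; the node numbering follows the description above, and the flows form a triangle of pairwise conflicts):

```python
from itertools import product

# Each flow occupies a set of directed ring links (from_node, to_node);
# labels follow the four-node ring 201 -> 202 -> 203 -> 204 -> 201.
flows = {
    210: {(201, 202), (202, 203), (203, 204)},
    215: {(204, 201), (201, 202)},
    220: {(203, 204), (204, 201)},
}

def min_wavelengths(flows):
    """Smallest number of wavelengths such that flows sharing a link
    never share a wavelength (no wavelength conversion at the nodes)."""
    ids = list(flows)
    for k in range(1, len(ids) + 1):
        for colors in product(range(k), repeat=len(ids)):
            if all(colors[i] != colors[j]
                   for i in range(len(ids)) for j in range(i + 1, len(ids))
                   if flows[ids[i]] & flows[ids[j]]):
                return k
    return len(ids)

print(min_wavelengths(flows))  # 3 -- every pair of flows shares some link
```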
In another case, the nodes 201-204 implemented as OPS nodes perform wavelength conversion or packet switching. The flows 210, 215, 220 are supported using only two wavelengths because each of the OPS nodes 201-204 can route optical packets in the optical flows 210, 215, 220 to different wavelengths in different timeslots. For example, the node 204 can route the packets in the flow 210 to the first wavelength for the hop to the node 201 and the node 201 can route the packets in the flow 210 to the second wavelength for the hop to the node 202. This approach reduces the number of wavelengths that are needed to support the flows 210, 215, 220 without blocking, waiting delays, or packet drops. However, wavelength conversion or packet switching requires an additional control channel that is transmitted on another wavelength at additional cost. Furthermore, the packets in the flows 210, 215, 220 need to be buffered to provide time to schedule the packets on the appropriate wavelengths. An advanced sub-optimal scheduling policy with buffering might require optical-electrical-optical conversion, which typically increases latency, decreases throughput, and generally degrades performance. These drawbacks are exacerbated if the number of supported wavelengths is lower than an optimal number, e.g., two wavelengths in the illustrated embodiment.
The scheduling and resource allocation problem shown in
The conventional OPS node 300 includes splitters 305, 306, 307, 308 that selectively direct different wavelengths along different routes and combiners 310, 311 that combine optical signals on different wavelengths into a single WDM signal. Fiber delay lines 312, 313 delay optical signals to provide latency needed for control channel processing. The de-multiplexers 315, 316 distribute the different wavelengths to wavelength-dependent packet blockers 320, 321, 322, 323, 324, 325 that can be configured to block or transmit packets on corresponding wavelengths. The multiplexers 330, 331 re-combine signals on the different wavelengths into a single WDM signal. A control channel processor 335 includes receivers 336, 337 that receive control signaling packets and transmitters 338, 339 that transmit control signaling packets generated by the control channel processor 335.
Bridge node hardware 340 includes receivers 341, 342, 343 that receive data packets and one or more transmitters 344 that transmit data packets. The bridge node hardware 340 also includes a set of queues 350, 351, 352 that hold information used to generate optical packets for transmission from the OPS node 300. A scheduler 355 schedules the optical packets in the queues 350-352 based on weights or priorities associated with the queues 350-352. The scheduler 355 does not provide deterministic jitter or latency guarantees.
In operation, the splitter 305 redirects a copy of the control signaling packet from the first WDM optical signal 301 to the receiver 337, as indicated by the dotted line, and the splitter 306 redirects a copy of the data packet from the first WDM optical signal 301 to the receiver 343, as indicated by the dashed line. The splitter 307 redirects a copy of the control signaling packet from the second WDM optical signal 302 to the receiver 336, as indicated by the dotted line, and the splitter 308 redirects a copy of the data packet from the second WDM optical signal 302 to the receivers 341, 342. The control channel processor 335 processes the control signaling and generates a control signal that is provided to the bridge node hardware 340. The control channel processor 335 also generates control signaling packets that are transmitted from the transmitter 338 to the combiner 311 and from the transmitter 339 to the combiner 310. The transmitter 344 in the bridge node hardware 340 provides optical data packets to the combiner 311. Although not shown in
The control channel, optical packet switching, and scheduling employed in the OPS node 300 support relatively high peak throughput in the ring network. However, the ring network is subject to fluctuations in latency and throughput, as discussed herein.
The incoming links 401, 402 are idle in the time interval 415. The OPS node 400 uses the idle time interval 415 to transmit optical packets from the transmit buffer 410, as indicated by the arrows 417, 418. The incoming links 401, 402 are active in the time interval 420. Optical packets received on the incoming links 401, 402 are stored in the receive buffer 405, as indicated by the arrows 421, 422. Depending on the traffic demand, copies of the optical packets received on the incoming links 401, 402 are also provided transparently to the outgoing links 403, 404 corresponding to the direction of the incoming links 401, 402. For example, optical packets received on the incoming link 402 are provided to the outgoing link 403, as indicated by the arrow 423, and optical packets received on the incoming link 401 are provided to the outgoing link 404, as indicated by the arrow 424.
Selective routing of the optical packets through the conventional OPS node 400 can be represented as pseudocode using the following notation:
Link 402 for the right side of the incoming packet flow is denoted as L_n^r. The transient packet received on this link at a given timeslot is denoted as P_{L_n^r}.
Link 403 for the left side of the outgoing packet flow is denoted as L′_n^l. The transmittable packet of the OPS node (if available in the electronic transmit buffer) at a given timeslot is denoted by p_n^1.
Link 404 for the right side of the outgoing packet flow is denoted as L′_n^r. The transmittable packet of the OPS node (if available in the electronic transmit buffer) at a given timeslot is denoted by p_n^2.
The following pseudocode also assumes that 1) a standard time synchronization mechanism over the control channel is available and 2) each “round” represents a fixed number of timeslots agreed in advance by all network nodes.
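Under the notation and assumptions above, the conventional per-timeslot behavior can be sketched as follows (a hypothetical simplification, not the disclosed pseudocode; `None` models an idle optical slot, and the insertion rule only fills idle slots so that no transient packet is dropped):

```python
def forward_timeslot(in_cw, in_ccw, tx_buffer):
    """One timeslot of a conventional OPS node (illustrative sketch):
    transient packets on each incoming link continue transparently on
    the outgoing link in the same direction of travel, and a local
    packet (p_n^1 or p_n^2) is inserted only into an idle slot."""
    out_cw, out_ccw = in_cw, in_ccw          # transparent optical relay
    if out_cw is None and tx_buffer:
        out_cw = tx_buffer.pop(0)            # insert into the idle clockwise slot
    if out_ccw is None and tx_buffer:
        out_ccw = tx_buffer.pop(0)           # insert into the idle counterclockwise slot
    return out_cw, out_ccw

# Idle incoming links: the node transmits from its buffer.
print(forward_timeslot(None, None, ["p_n1", "p_n2"]))   # ('p_n1', 'p_n2')
# Active links: transient packets pass through unchanged.
print(forward_timeslot("P_left", "P_right", ["p_n1"]))  # ('P_left', 'P_right')
```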
As discussed above, conventional OPS nodes in ring networks, such as the node 300 shown in
Some embodiments of the modified OPS node include an optical encoder that is configured to perform algebraic summation operations in the optical domain, e.g., utilizing optical XOR elements. The optical encoder combines transient traffic coming from both directions and forwards the encoded optical packets to the outputs during the next timeslot. Incorporating the optical encoder removes the need for routing or time/spatial division multiplexing by scheduling the optical packets across multiple timeslots or multiple wavelengths. Some embodiments of the modified OPS nodes also include an electrical decoder that decodes the incoming signal by solving a set of linear equations that are determined based on the ring topology. One benefit of using the optical encoder is that the transient traffic between nodes in the OPS-based ring network does not need to go through an optical-electrical-optical conversion because the OPS nodes perform the encoding in the optical domain, which allows the OPS node to rapidly process the transient traffic in a non-blocking manner, thereby resulting in higher throughput. A communication protocol for the modified OPS node allows the OPS nodes in the ring network to encode and decode optical packets. The communication protocol guarantees deterministic jitter and delay by coordinating the OPS nodes to improve recovery and decoding of the received optical signals after a fixed number of rounds, which increases the reliability of the network and improves throughput.
In the following description of some embodiments, the structure and operation of OPS nodes are discussed in the context of a bidirectional ring network of N nodes represented by a set 𝒩 = {1, 2, 3, . . . , N}. Nodes transmit packets to other nodes in the ring network, receive their intended packets, or relay packets to neighbor nodes, thus allowing transient traffic to reach its destination at one of the other nodes in the ring network. Each node n in the ring network includes bidirectional (transmission/reception or outgoing/incoming) links. The following notation is used to indicate the links and corresponding directions:
The ring network has optical packet switching capabilities that can operate in multiple wavelengths in the optical domain without need of optical-electrical-optical conversion. However, in the interest of clarity, the following discussion focuses on single wavelength communication without loss of generality.
Nodes include a transmit buffer and a receive buffer in the electronic domain. Packets in the transmit buffer are forwarded to the optical domain for transmission. In some cases, packets are converted from the optical domain to the electronic domain via the receive buffer. The packets in the electronic receive buffer of a node are sent to upper layers of the node for further processing.
All network nodes are synchronized in time, where each transmission (or reception) is done in a time block/round that can take up to T timeslots. Within its circuit, a node is allowed to insert a packet from the electronic domain into the optical domain in any timeslot, as long as the optical slot is empty or insertion of the packet does not cause a transient packet to be dropped. As used herein, the term “transient packet” refers to a packet that is received at a node but is not destined for the node. The receiving node is only responsible for forwarding the transient packet around the ring network. Nodes also receive optical packets in any timeslot by converting the optical packet to an electronic packet and storing the electronic packet in the receive buffer for further processing. The node decodes electronic packets that are destined for the node.
The worst-case demand scenario for a ring network composed of OPS nodes is a broadcast demand profile in which all nodes transmit and receive from all other nodes. However, the ring network can operate in other demand profiles such as peer-to-peer, server-client, multicast, and the like. Simulation results for the broadcast, peer-to-peer, server-client, and multicast demand profiles are provided below. The broadcast demand profile is discussed in detail below.
In the broadcast demand profile, each round consists of at most T timeslots. Initially, each node n intends to broadcast its packet P_n to all other nodes in the ring network, which are denoted as the set {m ∈ 𝒩 | m ≠ n}. In this situation, the ring network is fully loaded during the T timeslots because all nodes have to transmit/receive/forward packets until all packet demands are satisfied/delivered. Each node inserts a new packet at the first timeslot of a communication round; new packet insertions during the other slots of the round are not possible because the links are fully occupied. Inserting new packets in this situation would cause packet drops. A new packet could be inserted by performing optical-electrical-optical conversion with buffering, which would cause unpredictable delays and jitter due to sub-optimal scheduling mechanisms. In the illustrated embodiments, the broadcast demand profile forces the ring network to be fully loaded in a single communication round. Packet insertion in low-loaded demand scenarios (e.g., multicast) is evaluated below.
For a given node n at a given round, the packet Pn is divided into two subpackets Pn1 and Pn2. Conventional OPS nodes such as the OPS node 300 shown in
The outgoing links from nodes in the network are subject to failures. In order to estimate the effect of network link failures, a failure event of an outgoing link is drawn from an i.i.d. Bernoulli random variable with failure probability p_fail and success probability 1−p_fail during each timeslot. In the event of a failure, an outgoing link provides no packet to its neighbors in that timeslot. In some cases, the node re-transmits the packet in subsequent timeslots. The performance of a state-of-the-art baseline communication scheme (e.g., using a conventional OPS node) is compared to the performance of a modified OPS node such as the OPS node 500 discussed below with regard to
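The failure model can be reproduced with a small Monte-Carlo sketch (the parameters and the `simulate_round` helper are illustrative, not part of the disclosure):

```python
import random

def simulate_round(n_links, n_timeslots, p_fail, trials=100_000, seed=1):
    """Monte-Carlo estimate of the expected number of failed link
    transmissions per round; each outgoing link fails i.i.d. with
    probability p_fail in each timeslot (Bernoulli trials)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        failures += sum(rng.random() < p_fail
                        for _ in range(n_links * n_timeslots))
    return failures / trials

# 8 outgoing links (4 bidirectional nodes) over 3 timeslots per round:
# the expectation is 8 * 3 * p_fail = 0.24 failed transmissions.
est = simulate_round(n_links=8, n_timeslots=3, p_fail=0.01)
print(est)   # close to 8 * 3 * 0.01 = 0.24
```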
In a scenario where p_fail = 0 and T is at least N−1, the performance of the modified OPS node is always better, at the cost of additional encoding/decoding computational complexity. In a real-world scenario, queueing delays in the conventional method, optical-electrical-optical conversion delays, and link failures reduce effective throughput and cause more delay and jitter in the ring network. In contrast, the modified OPS node increases the performance of the ring network by using packet encoding to create a “memory” of the packet flows and by fully utilizing the links in both directions, owing to its encoding and decoding mechanism.
The OPS node 500 includes splitters 505, 506 that selectively direct different wavelengths along different routes. Node hardware 510 generates information included in optical packets that are transmitted by the OPS node 500. The node hardware 510 includes buffers 511, 512 and corresponding schedulers 513, 514 that provide optical packets from the buffers 511, 512 to the transmitters 515, 516. In the illustrated embodiment, the schedulers 513, 514 use first-in-first-out (FIFO) scheduling to schedule optical packets for transmission by the corresponding transmitters 515, 516. Thus, scheduling is provided with no jitter (or jitter below a predetermined tolerance) and a fixed latency. The node hardware 510 includes receivers 520, 521, 522 to receive encoded optical signals from neighboring OPS nodes and a decoder 525 to decode the encoded optical signals using matrix operations determined by the ring topology, as discussed herein. Optical amplifiers 530, 531 amplify optical signals prior to transmission from the OPS node 500 onto the ring network.
The OPS node 500 includes optical encoders 535, 536 that encode the optical signals received from the splitters 505, 506 and signals generated by the node hardware 510. The optical encoder 535 includes demultiplexers 540, 541, 542, multiplexer 545, and optical arithmetic units (OAUs) 550, 551, 552. The optical encoder 536 includes demultiplexers 555, 556, 557, multiplexer 560, and optical arithmetic units (OAUs) 565, 566, 567. The demultiplexers 540-542, 555-557 split WDM optical signals into their component wavelength channels and the OAUs 550-552, 565-567 encode the optical signals by combining the optical signals on the different channels. Some embodiments of the OAUs 550-552, 565-567 encode the optical signals using XOR operations. The multiplexers 545, 560 combine the optical signals on the different channels into a WDM optical signal for transmission to a neighboring OPS node in the ring network. Although three wavelengths (and corresponding numbers of demultiplexers and OAUs) are shown in
Some embodiments of the OPS node 500 use the following communication protocol to combine or encode incoming optical packets with optical packets generated by the OPS node 500. For example, locally generated optical packets are encoded with incoming optical packets using a bitwise XOR operator (denoted as ⊕) in an algebraic way. The OPS node 500 and neighbor OPS nodes store received encoded packets in a receive buffer and decode the packets during or at the end of each communication round, depending on the communication mode. Thus, each OPS node combines the packets incoming from both directions in different timeslots, injects its own packet if needed, and transmits the encoded packets in both directions. The proposed scheme always utilizes its links and creates combinations of packets in the optical domain, thus creating a network with memory that is less prone to link errors, at the cost of computational decoding complexity.
The following pseudocode represents one embodiment of the communication protocol:
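A Python sketch consistent with the three transmission modes described herein is given below (the helper names are illustrative and the sketch assumes equal-length portions; it is not the disclosed pseudocode itself):

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def node_outputs(t, p1, p2, rx_first_dir, rx_second_dir):
    """Outputs of one node in time interval t (1-indexed) of a round:
    t == 1 is the initial interval, t == 2, 4, ... are even intervals,
    and t == 3, 5, ... are odd intervals. Returns the pair
    (packet sent in the first direction, packet sent in the second)."""
    if t == 1:                                # initial interval
        enc = xor(p1, p2)
        return enc, enc                       # same encoding, both directions
    if t % 2 == 0:                            # even interval
        return (xor(p1, rx_second_dir),       # first portion ⊕ second-direction input
                xor(p2, rx_first_dir))        # second portion ⊕ first-direction input
    return (xor(p2, rx_second_dir),           # odd interval
            xor(p1, rx_first_dir))

p1, p2 = b"\x0f", b"\xf0"
print(node_outputs(1, p1, p2, b"", b""))      # (b'\xff', b'\xff')
```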
During an initial time interval 605, a packet generated by the OPS node is extracted from the transmit buffer 610 and partitioned into a first portion and a second portion. The first and second portions are encoded by combining the portions using an XOR operation. The encoded optical packets are then provided to the outgoing links 603, 604, as indicated by the arrows 614, 615. The encoded optical packets are also provided to the receive buffer 605, as indicated by the arrows 616, 617. Optical packets received on the incoming links 601, 602 are stored in the receive buffer 605, as indicated by the arrows 618, 619.
During an even time interval 620, packets received on the incoming links 601, 602 are provided to the receive buffer 605, as indicated by the arrows 621, 622. The packets received on the incoming links 601, 602 are also provided to encoders (or OAUs) 630, 635, as indicated by the arrows 623, 624. Packets generated by the OPS node 600 are transmitted from the transmit buffer 610. The first portion of the packet is transmitted to the encoder 630, as indicated by the arrow 625. The encoder 630 combines the first portion of the packet with the packet received on the incoming link 602 and the combined (encoded) packet is provided to the output link 603, as indicated by the arrow 626. The second portion of the packet is transmitted to the encoder 635, as indicated by the arrow 627. The encoder 635 combines the second portion of the packet with the packet received on the incoming link 601 and the combined (encoded) packet is provided to the output link 604, as indicated by the arrow 628.
During an odd time interval 640, packets received on the incoming links 601, 602 are provided to the receive buffer 605, as indicated by the arrows 641, 642. The packets received on the incoming links 601, 602 are also provided to encoders (or OAUs) 630, 635, as indicated by the arrows 643, 644. Packets generated by the OPS node 600 are transmitted from the transmit buffer 610. The second portion of the packet is transmitted to the encoder 630, as indicated by the arrow 645. The encoder 630 combines the second portion of the packet with the packet received on the incoming link 602 and the combined (encoded) packet is provided to the output link 603, as indicated by the arrow 646. The first portion of the packet is transmitted to the encoder 635, as indicated by the arrow 647. The encoder 635 combines the first portion of the packet with the packet received on the incoming link 601 and the combined (encoded) packet is provided to the output link 604, as indicated by the arrow 648.
At decision block 715, the OPS node determines whether the current time interval is the initial time interval of the round. If so, the method flows to block 720 and packets are encoded for transmission according to the first encoding/transmission mode. In some embodiments, the first encoding/transmission mode includes combining the first and second portions of the optical packet using an XOR operation and transmitting copies of the encoded optical packet in a first direction and a second direction into the ring network that includes the OPS node. If the current time interval is not the initial time interval of the round, the method 700 flows to decision block 725.
At decision block 725, the OPS node determines whether the current time interval is an even time interval. If so, the method 700 flows to block 730 and packets are encoded for transmission according to the second encoding/transmission mode. In some embodiments, the second encoding/transmission mode includes combining the first portion with an optical packet that is received from a neighbor OPS node in the second direction. This encoded packet is transmitted into the ring network in the first direction. The second encoding/transmission mode also includes combining the second portion with an optical packet received from the first direction. This encoded packet is transmitted into the ring network in the second direction. If the current time interval is not an even time interval, the method flows to block 735.
At block 735, the current time interval is an odd time interval and packets are encoded for transmission according to the third encoding/transmission mode. In some embodiments, the third encoding/transmission mode includes combining the second portion with an optical packet received from the second direction. This encoded packet is transmitted into the ring network in the first direction. The third encoding/transmission mode also includes combining the first portion with an optical packet received from the first direction. This encoded packet is transmitted into the ring network in the second direction. The method 700 then flows to decision block 740.
At decision block 740, the OPS node determines whether the end of the round has been reached. If so, the method 700 flows to block 745 and the round ends. If the end of the round has not been reached, the method 700 flows to decision block 715 and another iteration is performed for the next time interval of the round.
An example illustrating some embodiments of the communication protocol disclosed herein is discussed below in the context of a ring network having four OPS nodes and operating in a broadcast demand profile with zero link failures, that is, p_fail = 0. A single round consisting of T = 3 timeslots is performed. Each node broadcasts a packet to all other nodes in the network:
With no link failures and the given broadcast demand profile, the number of timeslots required in a round for a node to completely receive all packets is N−1, both for the conventional communication protocol and the modified communication protocol disclosed herein, because this is the number of time intervals needed for a packet to traverse all other (N−1) nodes. In the case of link failures, the network would need more time steps to accomplish its delivery tasks. In low-loaded scenarios with no link failures, the network may require fewer than N−1 time steps.
As discussed herein, the packets transmitted by a node and the packets received by the node (from both directions of the ring network) in successive time intervals are stored in a receive buffer.
Table 1 illustrates the contents of the receive buffer at the end of the corresponding timeslots. For example, the packet P42 is sent from the right outgoing link of Node 4 and travels through Node 1, Node 2, and Node 3, respectively, according to the ring topology. Once the round is over (at the end of 3 timeslots), all nodes have the packets sent from the other nodes. For example, the receive buffer of Node 1 stores the following subpackets at the end of the round: {P42, P21, P32, P31, P22, P41}. The receiver in Node 1 is therefore able to decode all packets coming from other nodes in the broadcast demand scenario using the contents of the receive buffer.
Table 2 illustrates the contributions to the receive buffers of the OPS nodes in the ring network at the end of each timeslot in the round. Entries in the receive buffer include the encoded packets generated by the OPS nodes in the ring network according to some embodiments of the techniques disclosed herein.
Table 3 illustrates the contents of the receive buffer for node 1 at the end of the round. The entries include incoming encoded packets as well as transmitted and outgoing packets that are recorded at the end of each time interval of the round.
The underlying packet combinations and output values (e.g., as shown in Table 3) therefore form a system of linear equations, which can be solved in a deterministic amount of computational time using Gaussian elimination or other methods. The matrices that define the linear combinations of optical packets in the entries of the receive buffer are determined by the ring topology and the encoding scheme, so the underlying packet combination is always the same. The decoding computation can be performed spatially in modern ASICs/FPGAs, effectively eliminating decoding delay. For example, writing the equations from Table 3 in matrix form produces the following structure:
The above structure can be summarized as Cp = y, where the matrix C ∈ {0,1}^(4T×2N) encodes the linear combinations of packets, the column vector p of size 2N×1 represents the symbolic packet variables, and the vector y contains the output values observed at the receive buffer. The coefficient matrix C is full rank by construction of the transmission scheme, so all packets are decodable in this setup.
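The decoding step Cp = y can be sketched as Gaussian elimination over GF(2), where XOR replaces addition and subtraction (applied bytewise to packet payloads in practice; single bits stand in for packets here). The 4×4 example matrix below is hypothetical, since the actual C is fixed by the ring topology and encoding scheme; the solver itself is generic for any full-rank binary system.

```python
def gf2_solve(C, y):
    """Solve C p = y over GF(2); C must have full column rank."""
    C = [row[:] for row in C]  # work on copies
    y = y[:]
    rows, cols = len(C), len(C[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row with a 1 in this column and swap it into pivot position.
        for r in range(pivot_row, rows):
            if C[r][col]:
                C[pivot_row], C[r] = C[r], C[pivot_row]
                y[pivot_row], y[r] = y[r], y[pivot_row]
                break
        else:
            raise ValueError("matrix is rank deficient")
        # XOR the pivot row into every other row with a 1 in this column.
        for r in range(rows):
            if r != pivot_row and C[r][col]:
                C[r] = [a ^ b for a, b in zip(C[r], C[pivot_row])]
                y[r] ^= y[pivot_row]
        pivot_row += 1
    return y[:cols]  # reduced system is the identity on the first `cols` rows

# Hypothetical full-rank linear combinations of four packet variables:
C = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
p_true = [1, 0, 1, 1]  # packet values (one bit per packet in this sketch)
y = [sum(c * v for c, v in zip(row, p_true)) % 2 for row in C]
print(gf2_solve(C, y))  # recovers p_true: [1, 0, 1, 1]
```

Because the elimination pattern depends only on C, which is fixed in advance, the XOR network it induces can be laid out as combinational logic, which is what allows the spatial ASIC/FPGA implementation mentioned above.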
Numerical studies are used to evaluate various performance metrics under several demand profiles and link error probabilities. In particular, a discrete-time event simulation performs the transmission/reception of the OPS nodes in an optical ring network while mimicking link failure events via probabilistic Monte-Carlo trials. Each point in
The performance of the above methods is evaluated under the following demand profiles:
For a given method and demand profile, at the beginning of each communication round the nodes in the network begin operating in time steps until the intended packet flows from both directions are delivered to their destinations, which finalizes the communication round for the employed method. Each incoming link or outgoing link is assumed to have 10 Gbit/s of capacity, and each hop introduces one microsecond of delay due to propagation/transmission/processing. The following performance metrics are then calculated for each method that has finished its round in a certain number of time steps:
The evolution of these performance metrics with respect to the link failure probability, under all of the aforementioned demand profiles, is examined in the following subsections.
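The Monte-Carlo evaluation loop described above can be sketched as follows. This simplified sketch floods packets in one direction only and retries a failed hop in the next timeslot; the function name, the retry policy, and the trial counts are illustrative assumptions, and the actual protocols under evaluation are more elaborate (bidirectional, with encoded packets).

```python
# Monte-Carlo sketch: for each link-failure probability p_fail, broadcast
# rounds are simulated in an N-node ring and the round-completion time
# (in timeslots, each corresponding to one microsecond of hop delay) is
# averaged over independent trials.
import random

def broadcast_round_time(n_nodes, p_fail, rng, max_steps=10_000):
    """Timeslots until every node has heard from every other node."""
    # heard[n] = set of sources whose packets node n has received.
    heard = [{n} for n in range(n_nodes)]
    # frontier[n] = sources whose packets currently sit at node n.
    frontier = [{n} for n in range(n_nodes)]
    for t in range(1, max_steps + 1):
        nxt = [set() for _ in range(n_nodes)]
        for n in range(n_nodes):
            for src in frontier[n]:
                if rng.random() < p_fail:
                    nxt[n].add(src)               # link failed: retry next slot
                else:
                    nxt[(n + 1) % n_nodes].add(src)  # advance one hop
        for n in range(n_nodes):
            heard[n] |= nxt[n]
        frontier = nxt
        if all(len(h) == n_nodes for h in heard):
            return t
    return max_steps

rng = random.Random(7)
for p_fail in (0.0, 0.1, 0.3):
    trials = [broadcast_round_time(4, p_fail, rng) for _ in range(500)]
    print(p_fail, sum(trials) / len(trials))
```

With p_fail = 0 the sketch completes a 4-node broadcast round in exactly N-1 = 3 timeslots, consistent with the analysis above, and the average round time grows as the link-failure probability increases.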
In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
A computer readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).
As used herein, the term “circuitry” may refer to one or more or all of the following:
Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.