TASA: a TDM ASA-based optical packet switch

Information

  • Patent Grant
  • 10499125
  • Patent Number
    10,499,125
  • Date Filed
    Wednesday, December 14, 2016
  • Date Issued
    Tuesday, December 3, 2019
Abstract
A scalable AWGR-based optical packet switch, called TASA (short for TDM ASA), is presented in this invention. The switch is a modified version of the ASA switch but does not have its drawbacks. The total port count is N² and each port can transmit up to N packets of different wavelengths simultaneously. This makes the total capacity of the switch close to (N² × bandwidth of one wavelength channel) × N, i.e., close to (N³ × bandwidth of one wavelength channel).
Description
TECHNICAL FIELD

The invention relates generally to the design of optical packet switches and methods, and in particular, scalable AWGR-based optical packet switching systems.


BACKGROUND

The energy-per-bit efficiency has become the ultimate capacity-limiting factor for future routers and data center networks. People have turned to optics for a solution. If the switching can be done optically, the E/O and O/E conversions inside the switch will disappear and tremendous power savings can be achieved. As optical switching devices, AWGRs (Arrayed Waveguide Grating Routers) have many advantages. They are passive and consume little or no power. They support WDM signals and have high per-port capacity. The only problem is that they are limited in size. How to design a scalable AWGR-based switch is an important issue for future datacenters and routers.


One such design is the ASA switch architecture presented in U.S. Pat. No. 9,497,517 and in the paper C.-T. Lea, "A Scalable AWGR-Based Optical Switch," Journal of Lightwave Technology, vol. 33, no. 22, Nov. 2015, pp. 4612-4621. This architecture contains an optical switching fabric plus an electronic scheduler. The optical switching fabric consists of three switching stages: AWGRs (Arrayed Waveguide Grating Routers), space switches, and AWGRs. It is named ASA for the technologies used in the three stages. The sizes of the AWGRs and the optical space switches used in the architecture are N×N, and there are up to N AWGRs in each of the first and the third stages. This makes the maximum port count of the ASA architecture N². Each port can send up to N packets of different wavelengths simultaneously. The total capacity of the switch is close to (N³ × bandwidth of each wavelength channel).


Although the ASA switch architecture can expand the port count from N to N², it has two limitations: (i) it still needs an electronic scheduler to coordinate transmissions from all ports, which can become a potential bottleneck, and (ii) it performs poorly under certain traffic patterns. In this patent, a modified version of the ASA architecture is presented to fix these problems. The new switch does not need an electronic scheduler and performs equally well under any traffic load.


SUMMARY

The following discloses a summary of the invention of a scalable optical AWGR-based packet switch that does not need an electronic scheduler. The new switch is called TASA (short for TDM ASA) and is a modified version of the ASA switch described in U.S. Pat. No. 9,497,517. A TASA switch has an optical switch fabric, but no electronic scheduler. The optical switch fabric is similar to that used in an ASA switch and consists of three stages: the first-stage and the third-stage portions of the switching fabric comprise a plurality of N×N (N inputs and N outputs) AWGRs (arrayed waveguide grating routers), which are interconnected by a middle stage of N optical space switches of size N×N, where N is an odd integer.


But a TASA switch differs from an ASA switch in one major way: it does not need an electronic scheduler. The optical space switches of a TASA switch operate in a TDM (time division multiplexing) mode with a fixed predetermined connection pattern, so the TASA switch does not require an electronic scheduler to coordinate the transmissions from external ports. Which VOQs (virtual output queues) can be activated by a port can be easily determined from the TDM patterns used by the optical space switches in the TASA switch. This removes a potential bottleneck of the ASA architecture.


Using two TASA switches cascaded in tandem, we create a new optical packet switch that has a steady performance under any traffic pattern. The first TASA switch creates an evenly distributed traffic pattern for the second TASA switch. The entire switch is also fault tolerant, while an ASA switch cannot tolerate any failure.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A is an exemplary optical switch with an optical switch fabric and an electronic scheduler.



FIG. 1B is an exemplary optical switch which does not need an electronic scheduler.



FIG. 2 is a diagram illustrating the N² flows which can pass through an N×N AWGR.



FIG. 3 is a two-stage AWGR-based switch fabric, where N, the number of wavelengths of the AWGR, is an odd integer.



FIG. 4 is a diagram to illustrate the concept of a slice of flows.



FIG. 5 is an exemplary embodiment of a TASA switch fabric. It is similar to that of an ASA switch fabric. The first and the third stages are each constructed with N N×N AWGRs, and the middle stage with N N×N optical space switches, where N is an odd integer. The TASA fabric, however, does not need an electronic scheduler.



FIG. 6 illustrates the TDM controller that controls an optical space switch in a TASA switch.



FIG. 7 is an exemplary implementation of TDM connection patterns in a 5×5 optical space switch. These patterns will be repeated every 5 slots.



FIG. 8A illustrates an optical switch containing two TASA optical switch fabrics cascaded in tandem. The first switch fabric is used to create an evenly distributed traffic pattern and the second switch fabric routes packets to their original destinations.



FIG. 8B illustrates another implementation of a TASA-based switching system. This implementation uses only one TASA switch fabric. Each packet needs to pass through the TASA switch fabric twice.



FIGS. 9A-E show the corresponding VOQs that will be activated for transmission in a given slot based on the fixed TDM connection patterns given in FIG. 7.



FIG. 10 is an exemplary embodiment of the port processor described in FIG. 8.



FIG. 11 illustrates how to select VOQs in each input port processor for transmission in each slot.





DETAILED DESCRIPTION

The subject innovation presents architectures and methods relating to the construction of scalable all-optical AWGR-based packet switches that do not require electronic schedulers. A switching fabric (e.g., 121) provides the interconnection function between a plurality of input ports (e.g., 141A) and a plurality of output ports (e.g., 151A). An input port usually divides ('chops') an incoming data packet into fixed-length cells before they are sent to the switching fabric. The time for transmitting a cell is called a slot. The various exemplary embodiments of the invention presented herein operate in a 'cell mode' (i.e., all data packets transmitted through the switching fabric have the same length), and the terms 'packet' and 'cell' are used interchangeably herein.


Power consumption is becoming the ultimate bottleneck in the design of a router or a data center network. People have turned to optics for solutions. Switching a signal in the optical domain consumes significantly less power than switching a signal in the electronic domain. But to fully utilize optics' potential for reducing the physical and carbon footprint of a router or a data center network, we must exploit its WDM capability, because WDM can increase the overall capacity by thirty or forty times with little additional cost or power consumption. AWGRs (Arrayed Waveguide Grating Routers) provide the most promising solution in this regard.


An N-port AWGR (e.g., 200) operates on a set of N wavelengths (λ0, λ1, . . . , λN−1). A flow in such a device can be characterized by a three-tuple (i,w,o), where i denotes the input, w the wavelength used by the flow, and o the output. The relationship among the three parameters in (i,w,o) is given below:

o = (i + w) mod N.  (1)

From (1) we can see that given any two of the three parameters, the other parameter can be determined automatically. Thus in total there are only N² flows that can be defined in an N×N AWGR device. Each input can transmit N flows in a given slot and N² flows in total can traverse the device simultaneously without blocking each other.
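As a minimal illustration of Equation (1), the following Python sketch (hypothetical and not part of the patent; the value of N and the variable names are assumptions) enumerates the N² flows of an N×N AWGR and verifies that every input transmits on all N wavelengths and every output receives all N wavelengths, so all N² flows can coexist without blocking.

# Illustrative sketch of Equation (1): o = (i + w) mod N in an N x N AWGR.
N = 5  # an odd N is chosen here only to match the examples that follow

# All N^2 flows (i, w, o) that the device can carry.
flows = [(i, w, (i + w) % N) for i in range(N) for w in range(N)]
assert len(flows) == N * N

# Given any two of the three parameters, the third is determined:
i, o = 3, 1
w = (o - i) % N              # wavelength forced by the (input, output) pair
assert (i + w) % N == o

# Non-blocking: each input uses every wavelength once and each output
# receives every wavelength once.
for p in range(N):
    assert sorted(w for (i, w, o) in flows if i == p) == list(range(N))
    assert sorted(w for (i, w, o) in flows if o == p) == list(range(N))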


Although AWGRs have become the centerpiece of many proposed optical switches, this technology has one fundamental limitation: poor scalability. At present, the port count of a commercially available AWGR is around 50, but a future datacenter may need a switch with more than a thousand ports.


TASA Packet Switch


The ASA switch architecture presented in U.S. Pat. No. 9,497,517 is an AWGR-based optical switch. It can expand the port count from N to N², where N is the port count of an AWGR device. Its design principle is based on the two-stage network 300 shown in FIG. 3, which is a cascade of two AWGRs (i.e., 340 and 350). Each flow in the two-stage network is still characterized by a three-tuple (i,w,o). The relationship among the three parameters is now governed by

o = (i + 2w) mod N.  (2)

Note that in this two-stage network, N must be odd in order to support N² flows simultaneously. The two-stage network also has the property that given any two parameters in the three-tuple (i,w,o) of a flow, the other parameter can be uniquely determined. The flows passing through a link of stage 1 (i.e., 320) in FIG. 3 are given below.


Links of Stage 1:

    • line 0 (0, 0, 0) (4, 1, 1) (3, 2, 2) (2, 3, 3) (1, 4, 4)
    • line 1 (1, 0, 1) (0, 1, 2) (4, 2, 3) (3, 3, 4) (2, 4, 0)
    • line 2 (2, 0, 2) (1, 1, 3) (0, 2, 4) (4, 3, 0) (3, 4, 1)
    • line 3 (3, 0, 3) (2, 1, 4) (1, 2, 0) (0, 3, 1) (4, 4, 2)
    • line 4 (4, 0, 4) (3, 1, 0) (2, 2, 1) (1, 3, 2) (0, 4, 3)


The flows of each row are called a slice in the description below. All N² flows are divided into N slices, numbered from 0 to N−1 (e.g., slice 0, identified by 410 in FIG. 4). Given i and w of a flow, we can compute its slice number sn as follows:

sn = (i + w) mod N.  (3)

Similarly, given i and o of a flow, we can determine its slice number from Equations (2) and (3). The wavelength entries of the tuples above (shown shaded in the original drawing) can be computed from the other two parameters.
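The slice bookkeeping above can be checked with a short Python sketch (illustrative only; the variable names are not from the patent) that groups all N² flows of the two-stage network by slice number using Equations (2) and (3):

# Equation (2): o = (i + 2w) mod N; Equation (3): sn = (i + w) mod N.
N = 5  # must be odd for the two-stage network to carry N^2 flows simultaneously

slices = {s: [] for s in range(N)}
for i in range(N):
    for w in range(N):
        o = (i + 2 * w) % N      # output fixed by Equation (2)
        sn = (i + w) % N         # slice number (= stage-1 link) from Equation (3)
        slices[sn].append((i, w, o))

for s in range(N):
    print("line", s, slices[s])
# Each printed line contains the same five flows as the corresponding row of the
# table above (only the ordering differs), and no two flows of a slice share a
# wavelength.
for s in range(N):
    assert len({w for (i, w, o) in slices[s]}) == N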


      (A) ASA Switch Fabric


The two-stage network in FIG. 3 lays the foundation of the three-stage fabric of the ASA switch. FIG. 5 depicts an embodiment of a 3-stage ASA switching fabric (i.e., 500). There are N N×N AWGRs in the first stage (i.e., 510A-E) and in the third stage (i.e., 530A-E), and N N×N (N must be odd) optical space switches in the middle stage (e.g., 520A-E). MEMS devices or LiNbO3-based directional couplers can be used to implement the optical space switches.


The address of an input (or output) in an ASA switch is specified by a two-tuple [group, member], where group refers to the AWGR number and member refers to the link of the AWGR to which the input (output) is attached (see FIG. 5). We can still use the member field to define the slice of a flow in an ASA switch. In other words, for flow ([gs, ms], *, [gd, md]), we can use ms, md, and Equations (2) and (3) to compute the slice number of the flow and the wavelength to be used for the flow:

md = (ms + 2w) mod N
sn = (ms + w) mod N.

Under this definition, gs and gd represent the originating and destination AWGR of the flow.
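For a flow ([gs, ms], *, [gd, md]) the wavelength and slice number thus follow from ms and md alone. A hypothetical helper (not from the patent) that solves md = (ms + 2w) mod N using the modular inverse of 2, which exists because N is odd, might look like this:

N = 5  # AWGR size; must be odd

def wavelength_and_slice(ms, md, n=N):
    """Solve md = (ms + 2w) mod n for w, then compute sn = (ms + w) mod n."""
    w = ((md - ms) * pow(2, -1, n)) % n   # pow(2, -1, n) is the inverse of 2 mod n
    return w, (ms + w) % n

# Example used in the next paragraph: flow ([gs, 4], *, [gd, 1]).
w, sn = wavelength_and_slice(4, 1)
assert (w, sn) == (1, 0)   # wavelength 1, slice 0, so it uses the 0th space switch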


The topology of the ASA architecture dictates that all flows of the ith slice are sent to the ith optical space switch. For example, in the two-stage network of FIG. 3, flow (4, *, 1) belongs to the 0th slice, meaning it leaves from the 0th output link of the first AWGR in stage 1 and arrives at the 0th input link of the second AWGR in stage 2. In an ASA switch, ([gs, 4], *, [gd, 1]) is also a flow of the 0th slice. It leaves AWGR gs of stage 1 from its 0th output link and arrives at AWGR gd of stage 3 at its 0th input link. This flow is sent to the 0th optical space switch. A connection between gs and gd must be set up through the 0th optical space switch to allow this flow to pass through.


Since the traffic pattern can change from slot to slot, an electronic scheduler, such as 130 in FIG. 1A, is required to schedule the transmissions from different input ports and to configure the optical space switches of the middle stage in each slot. The scheduling is carried out based on the requests sent by the input ports to the scheduler. Grants are sent back by the scheduler for each slot after scheduling is performed. Requests and grants are sent through a separate fiber. This means that each port must have two pairs of optical fibers: one for the optical switch fabric and one for the electronic scheduler. The electronic scheduler in the ASA architecture consists of two stages of scheduling devices. The complexity of the scheduler is quite high, and the scheduling algorithm used inside the scheduler can also become a potential bottleneck.


The ASA architecture has another problem: when traffic is uneven, the performance of an ASA switch may be poor. This is because when the N transmitters of an input port are used simultaneously, the transmitted packets must be destined for different output ports. If all traffic from an input port is destined for a single output port, the throughput of the switch is only 1/N of its throughput under evenly distributed traffic.


(B) TASA Switch Fabric


The TASA switching fabric presented in FIG. 5 does not need an electronic scheduler to schedule packet transmissions. Instead, all middle-stage optical space switches are self-controlled and operate in a TDM (time division multiplexing) mode with a fixed predetermined connection pattern in each slot. This optical switching architecture is thus named TASA (short for TDM ASA). The TDM connection pattern of each slot is stored in the TDM Control Memory 620 shown in FIG. 6. An exemplary pattern of a given slot is shown in 700-740. These connection patterns repeat after N slots (N=5 in FIG. 7). Note that in our design, all optical space switches of the middle stage use the same pattern in a given slot. The result is that for a given slot, all outputs of an AWGR in stage 1 are connected to the inputs of only one AWGR in stage 3. Thus FIG. 7 also represents how an AWGR in stage 1 is connected to another AWGR in stage 3 (see 710-750 and FIGS. 9A-E). For example, in slot 1 the first AWGR of stage 1 will be connected to the second AWGR of stage 3 (see 701).
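For illustration, one cyclic pattern consistent with the slot-1 example above (stage-1 AWGR 0 connected to stage-3 AWGR 1) can be written as the following Python sketch; the actual patterns stored in the TDM Control Memory may differ, and this mapping is only an assumption used for the examples here.

N = 5  # one TDM cycle spans N slots

def tdm_pattern(slot, n=N):
    """Assumed cyclic pattern: in a given slot, stage-1 AWGR g is connected
    (through every middle-stage switch) to stage-3 AWGR (g + slot) mod n."""
    return {g: (g + slot) % n for g in range(n)}

# In every slot the mapping is a permutation, and the pattern repeats after N slots,
# so over one cycle each stage-1 AWGR reaches each stage-3 AWGR exactly once.
for slot in range(N):
    assert sorted(tdm_pattern(slot).values()) == list(range(N))
assert tdm_pattern(N) == tdm_pattern(0)

print(tdm_pattern(1))   # {0: 1, 1: 2, 2: 3, 3: 4, 4: 0}: AWGR 0 -> AWGR 1 in slot 1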


To use a TDM space switch for packet switching, two conditions must be met: (i) the traffic is evenly distributed, and (ii) the switch size is not too large. Since the size of an optical space switch in the TASA architecture is only N (while the total port count is N²), the second requirement is met automatically. A method described below makes the traffic pattern of a TASA switch fabric evenly distributed even if the original traffic pattern is not.



FIG. 8A shows a switch comprising two TASA switch fabrics cascaded in tandem and a stage of intermediate port processors between the two TASA fabrics. Although input ports and output ports in FIG. 8A are shown separately, in practice they are collocated on the same board (e.g., input port 811A and output port 812A, input port 811E and output port 812E). In FIG. 8A, an input port sends an incoming packet to a randomly selected intermediate port (e.g., 830A, 830E) regardless of the packet's original destination. Since the intermediate port is randomly chosen, the traffic pattern offered to the first TASA switch fabric can be considered evenly distributed. Furthermore, this also makes the traffic pattern entering the second TASA switch fabric evenly distributed. Therefore, the first requirement for using TDM space switches is met in both TASA fabrics in FIG. 8A.
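A quick simulation illustrates why random spraying evens out the load. The sketch below is hypothetical (the port count and cell count are arbitrary assumptions): it takes a worst-case pattern in which every cell targets one output port, forwards each cell to a uniformly random intermediate port, and measures the resulting per-port load.

import random
from collections import Counter

PORTS = 25        # e.g., N^2 external ports for N = 5 (assumed for illustration)
CELLS = 100_000   # cells that would all target one output port in the original pattern

# Each cell is forwarded to a uniformly chosen intermediate port, so the traffic
# entering the first (and hence the second) TASA fabric is spread evenly.
load = Counter(random.randrange(PORTS) for _ in range(CELLS))
shares = [load[p] / CELLS for p in range(PORTS)]
print(min(shares), max(shares))   # both close to 1/PORTS = 0.04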



FIG. 8B shows another implementation of the same idea with just one TASA switch fabric. Each packet in FIG. 8B passes through the TASA switch fabric 870 twice. In the first round, an input port processor (e.g., 850A) randomly selects an output port for the packet. This creates an evenly distributed traffic pattern, as described for the cascaded approach. In the second round, an output port processor (e.g., 851A) sends the packet back to its co-located input port processor (i.e., 850A), which then routes the packet to its original destination output port. Since each packet is transmitted twice, the throughput of the switch is only 50% of the throughput of the switch in FIG. 8A, but the amount of hardware is also cut in half.


Input/output Port & Methodology


Although input/output ports are external to the TASA switch fabric, the implementation of a port processor is given below to demonstrate the methodology for using the switch in FIG. 8A.



FIG. 10 illustrates an exemplary, non-limiting embodiment of input port 811A and output port 811B. Line card receiver 1010 receives an incoming packet from a line card and randomly generates an intermediate port number as the output port address (see the Output 1 header field shown below). The original output port address is kept in the Output 2 header field.

  • Output 1 (randomly generated)
  • Output 2 (original destination)
  • Data
It then puts the packet into the corresponding VOQ (virtual output queue); the VOQs are organized by output port.
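A minimal data-structure sketch of the cell format and the VOQ organization is given below (hypothetical Python names; the patent does not prescribe an implementation):

from dataclasses import dataclass
from collections import defaultdict, deque

@dataclass
class Cell:
    output1: int    # randomly generated intermediate port (first header field)
    output2: int    # original destination port (second header field)
    data: bytes

# One VOQ per output port; a cell joins the queue of its Output 1 address.
voqs = defaultdict(deque)

def enqueue(cell: Cell) -> None:
    voqs[cell.output1].append(cell)

enqueue(Cell(output1=3, output2=17, data=b"payload"))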


As described in U.S. Pat. No. 9,497,517, the following properties hold in TASA: (i) all flows of a slice use different wavelengths, and (ii) two flows destined for the same output port automatically use two different slices. Thus the VOQ controller 1021 can launch transmissions for all VOQs allowed by the TDM connection patterns. For example, input port processor 0 will launch transmissions for all VOQs shown in 900 in slot 0, all VOQs shown in 910 in slot 1, all VOQs shown in 920 in slot 2, etc. (note that FIGS. 9A-E represent the TDM connection patterns). The wavelength to be used for each packet in these VOQs can be computed from Eq. (2). Controller 1021 moves the HOL (head of line) packets of these VOQs into the corresponding transmission buffers 1030, one for each wavelength. These packets are sent out to fiber 802 through optical transmitters 1031 and multiplexer 1032.
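Combining the cyclic TDM pattern assumed in the earlier sketch with Equation (2), the VOQs a port may serve in a slot, and the wavelength for each, can be derived as follows. This is an illustrative Python sketch, not the patent's controller logic; gs and ms denote the port's group and member address.

N = 5

def active_voqs(gs, ms, slot, n=N):
    """VOQs that input port [gs, ms] may serve in this slot under the assumed
    cyclic TDM pattern: the slot fixes the reachable destination AWGR gd, and each
    of its members md is reached on its own wavelength w with md = (ms + 2w) mod n."""
    gd = (gs + slot) % n                         # destination AWGR allowed by the pattern
    inv2 = pow(2, -1, n)
    return [((gd, md), ((md - ms) * inv2) % n)   # (destination port, wavelength)
            for md in range(n)]

# Input port processor [0, 0] in slot 1: N destinations in AWGR 1, one per wavelength.
for dest, w in active_voqs(0, 0, slot=1):
    print("VOQ", dest, "-> wavelength", w)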


When packets arrive from the TASA fabric through fiber 803, they are de-multiplexed by Demux 1080 and converted to electronic signals by optical detector array 1081. Since packets can be delivered out of sequence because the Output 1 address is randomly selected, the re-sequencer 1060 puts packets from each input port back into sequence and stores them in the output buffer queue 1050 before they are shipped to the line card attached to this port.
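The re-sequencing function can be realized, for example, with per-source sequence numbers and a small reorder buffer. The sketch below is a hypothetical illustration, as the patent does not specify the re-sequencer's internal algorithm.

import heapq
from collections import defaultdict

class Resequencer:
    """Release cells of each source port in order; buffer out-of-order arrivals."""

    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected sequence number per source
        self.pending = defaultdict(list)   # min-heap of (seq, cell) per source

    def push(self, src, seq, cell):
        heapq.heappush(self.pending[src], (seq, cell))
        released = []
        # Drain the heap as long as its head fills the next gap for this source.
        while self.pending[src] and self.pending[src][0][0] == self.next_seq[src]:
            released.append(heapq.heappop(self.pending[src])[1])
            self.next_seq[src] += 1
        return released    # in-order cells handed to output buffer queue 1050

r = Resequencer()
assert r.push("input_port_7", 1, "cell-B") == []                   # early, buffered
assert r.push("input_port_7", 0, "cell-A") == ["cell-A", "cell-B"]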


Methodology



FIG. 11 presents a diagram illustrating an exemplary, non-limiting embodiment of the steps adopted by each input port processor in the optical switch described in FIG. 10.


Step 1: When a packet arrives from a line card, the line-card receiver 1010 inside an input port processor randomly selects an intermediate port (e.g. 830A) as the output of the incoming packet, and puts the packet into a corresponding VOQ 1020.


Step 2: The VOQ control unit 1021 selects the HOL (head of line) packets of all VOQs that will be activated in a given slot. The selection is based on the TDM connection pattern stored in the input port processor.


Step 3: The VOQ controller computes the wavelengths of the selected packets and puts them into transmission buffers 1030, one for each wavelength. These packets are sent out through optical transmitters 1031.

Claims
  • 1. A switching system comprising:
    (a) a first optical switch fabric which comprises
    a first switching stage comprising m N×N AWGRs, numbered from 0 to m−1, for cyclically routing component wavelengths, each component wavelength carrying a data packet, of first WDM signals received from external processors and sending first routed WDM signals to a second switching stage,
    the second switching stage comprising N m×m optical space switches, numbered from 0 to N−1, operating in a pre-determined TDM (time division multiplexing) mode to switch the first routed WDM signals to a third switching stage, and
    the third switching stage comprising m N×N AWGRs, numbered from 0 to m−1, for cyclically routing component wavelengths of second WDM signals, received from the second switching stage, to a middle port-processor stage;
    (b) the middle port-processor stage which comprises a plurality of port processors; and
    (c) a second optical switch fabric which comprises
    a fourth switching stage comprising m N×N AWGRs, numbered from 0 to m−1, for cyclically routing component wavelengths, each component wavelength carrying a data packet, of third WDM signals received from the plurality of port processors of the middle port-processor stage and sending third routed WDM signals to a fifth switching stage,
    the fifth switching stage comprising N m×m optical space switches, numbered from 0 to N−1, operating in a pre-determined TDM (time division multiplexing) mode to switch the third routed WDM signals to a sixth switching stage, and
    the sixth switching stage comprising m N×N AWGRs, numbered from 0 to m−1, for cyclically routing component wavelengths of fourth WDM signals, received from the fifth switching stage, to output ports of the AWGRs of the sixth switching stage;
    wherein an external processor connected to the first optical switch fabric sends a packet to a randomly selected port processor of the middle port-processor stage through the first optical switch fabric, and the selected port processor of the middle port-processor stage sends the packet to its original destination through the second optical switch fabric.
  • 2. The switching system of claim 1, wherein m<N and addresses of input ports of the first optical switch fabric are represented by a two-tuple (group, member), 0≤group≤m−1, 0≤member≤N−1, group being an AWGR number in the first switching stage of the first optical switch fabric and member being an input port number of said AWGR;
    wherein addresses of output ports of the first optical switch fabric are represented by a two-tuple (group, member), 0≤group≤m−1 and 0≤member≤N−1, group being an AWGR number in the third switching stage of the first optical switch fabric and member being an output port of said AWGR;
    wherein an external processor connected to an input port (g1, m1), g1 being a group value and m1 a member value, of the first optical switch fabric uses wavelength w to transmit data packets destined for a randomly selected port processor of the middle port-processor stage connected to an output port (g2, m2), g2 being a group value and m2 a member value, of the first optical switch fabric, and wherein w is computed from m2=(m1+2w) mod N.
US Referenced Citations (40)
Number Name Date Kind
6288808 Lee Sep 2001 B1
6829401 Duelk Dec 2004 B2
7058062 Tanabe Jun 2006 B2
7110394 Chamdani Sep 2006 B1
7151774 Williams Dec 2006 B1
7212524 Sailor May 2007 B1
7430346 Jennen Sep 2008 B2
7453800 Takajitsuko Nov 2008 B2
7693142 Beshai Apr 2010 B2
8315522 Urino Nov 2012 B2
8705500 Aybay Apr 2014 B1
8792787 Zhao Jul 2014 B1
9401774 Mineo Jul 2016 B1
9497517 Lea Nov 2016 B2
20020024951 Li Feb 2002 A1
20020039362 Fisher Apr 2002 A1
20020075883 Dell Jun 2002 A1
20020085578 Dell Jul 2002 A1
20030081548 Langevin May 2003 A1
20030193937 Beshai Oct 2003 A1
20040165818 Oikawa Aug 2004 A1
20060165070 Hall Jul 2006 A1
20060168380 Benner Jul 2006 A1
20070030845 Hill Feb 2007 A1
20070092248 Jennen Apr 2007 A1
20080159145 Muthukrishnan Jul 2008 A1
20080267204 Hall Oct 2008 A1
20080285449 Larsson Nov 2008 A1
20090003827 Kai Jan 2009 A1
20090028140 Higuchi Jan 2009 A1
20090080895 Zarris Mar 2009 A1
20090175281 Higuchi Jul 2009 A1
20090324243 Neilson Dec 2009 A1
20120243868 Meyer Sep 2012 A1
20130083793 Lea Apr 2013 A1
20130121695 Bernasconi May 2013 A1
20130266309 Binkert Oct 2013 A1
20140255022 Zhong Sep 2014 A1
20150104171 Hu Apr 2015 A1
20150289035 Mehrvar Oct 2015 A1
Foreign Referenced Citations (1)
Number Date Country
1 162 860 Dec 2001 EP
Non-Patent Literature Citations (2)
Entry
Lea, "A Scalable AWGR-Based Optical Switch," Journal of Lightwave Technology, vol. 33, no. 22, Nov. 2015, IEEE.
Aly et al., "Simulation of Fast Packet Switched Photonic Networks for Interprocessor Communication," Apr. 1991, IEEE.
Related Publications (1)
Number Date Country
20180167702 A1 Jun 2018 US